One of the main features envisioned and requested is the ability to augment the threat intelligence and enrichment processes with insights derived from machine learning or statistical models. The challenges with this sort of infrastructure are:
* Applying the model may be sufficiently computationally or resource intensive that we need to support scaling via load balancing, which will require service discovery and management.
* Models require out-of-band and frequent training to react to growing threats and new patterns that emerge.
* Models should be language and environment agnostic as much as possible, covering both small-data and big-data libraries and languages.
To support a high throughput environment that is manageable, it is evident that:
* Multiple versions of models will need to be exposed
* Deployment should happen using YARN to manage resources
* Clients should have new model endpoints pushed to them
To support these requirements, the following components have been created:
* A YARN application which listens for model deployment requests and, upon execution, registers the models' endpoints in ZooKeeper
* A command line deployment client which localizes the model payload onto HDFS and submits a model request
* A Java client which interacts with ZooKeeper and receives updates about model state changes (new deployments, removals, etc.)
* A series of Stellar functions which can be used to interact with models deployed via the Model as a Service infrastructure
The maas_service.sh script starts the YARN application, which listens for model deployment requests. Right now the request queue is a distributed queue stored in ZooKeeper for convenience.
```
./maas_service.sh
usage: MaaSClient
 -c,--create                          Flag to indicate whether to create the domain specified with -domain.
 -d,--domain <arg>                    ID of the timeline domain where the timeline entities will be put
 -e,--shell_env <arg>                 Environment for shell script. Specified as env_key=env_val pairs
 -h,--help                            This screen
 -j,--jar <arg>                       Jar file containing the application master
 -l,--log4j <arg>                     The log4j properties file to load
 -ma,--modify_acls <arg>              Users and groups that allowed to modify the timeline entities in the given domain
 -mc,--master_vcores <arg>            Amount of virtual cores to be requested to run the application master
 -mm,--master_memory <arg>            Amount of memory in MB to be requested to run the application master
 -nle,--node_label_expression <arg>   Node label expression to determine the nodes where all the containers of this application will be allocated, "" means containers can be allocated anywhere, if you don't specify the option, default node_label_expression of queue will be used.
 -q,--queue <arg>                     RM Queue in which this application is to be submitted
 -t,--timeout <arg>                   Application timeout in milliseconds
 -va,--view_acls <arg>                Users and groups that allowed to view the timeline entities in the given domain
 -zq,--zk_quorum <arg>                Zookeeper Quorum
 -zr,--zk_root <arg>                  Zookeeper Root
```
The maas_deploy.sh script allows users to deploy models and their collateral from their local disk to the cluster. It is assumed that the local directory supplied to the client contains all of the model's collateral, including a shell script that starts the model and exposes its endpoint.
```
./maas_deploy.sh
usage: ModelSubmission
 -h,--help                       This screen
 -hmp,--hdfs_model_path <arg>    Model Path (HDFS)
 -lmp,--local_model_path <arg>   Model Path (local)
 -l,--log4j <arg>                The log4j properties file to load
 -m,--memory <arg>               Memory for container
 -mo,--mode <arg>                ADD, LIST or REMOVE
 -n,--name <arg>                 Model Name
 -ni,--num_instances <arg>       Number of model instances
 -v,--version <arg>              Model version
 -zq,--zk_quorum <arg>           Zookeeper Quorum
 -zr,--zk_root <arg>             Zookeeper Root
```
Model as a Service will run on a kerberized cluster (see here for instructions for Vagrant) with a caveat: the user who submits the service is the user who executes the models on the cluster. That is to say, user impersonation for deployed models is not done at the moment.
Two Stellar functions have been added to provide the ability to call out to models deployed via Model as a Service. The first, MAAS_GET_ENDPOINT, recovers a load-balanced endpoint for a deployed model given its name and, optionally, its version. The second, MAAS_MODEL_APPLY, calls that endpoint, assuming it is exposed as a REST endpoint.
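For example, the combination used later in this walkthrough looks like the following (the model name 'dga' and the example host are illustrative; any Stellar expression producing a map of inputs will do):

```
MAP_GET('is_malicious', MAAS_MODEL_APPLY(MAAS_GET_ENDPOINT('dga'), {'host' : 'cnn.com'}))
```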
Let’s augment the squid proxy sensor to use a model that determines whether the destination host was generated by a domain generation algorithm (DGA). For the purposes of demonstration, this algorithm is deliberately simple and is implemented in Python with a REST interface exposed via the Flask library.
Now let’s install some prerequisites:
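On a CentOS-style node this amounts to something like the following (package names may differ on other distributions):

```
yum install -y squid
yum install -y python-flask
yum install -y python-jinja2
```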
Start Squid via `service squid start`.
Now that we have Flask and Jinja, we can create a mock DGA service to deploy with MaaS:
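A minimal sketch of such a service, assuming Flask (the filename dga_mock.py, the port 1500, and the endpoint.dat discovery file mirror the stock Metron example but should be treated as assumptions here):

```python
# dga_mock.py -- a minimal sketch of the mock DGA REST service.
import json
import socket

from flask import Flask, request, jsonify

app = Flask(__name__)
PORT = 1500  # illustrative; any free port works

@app.route("/apply", methods=['GET'])
def predict():
    # Treat yahoo.com and amazon.com as legit, everything else as malicious
    host = request.args.get('host')
    legit_hosts = ['yahoo.com', 'amazon.com']
    verdict = "legit" if host in legit_hosts else "malicious"
    return jsonify({"is_malicious": verdict})

if __name__ == "__main__":
    # Advertise the endpoint URL so MaaS can register it in ZooKeeper
    with open("endpoint.dat", "w") as f:
        json.dump({"url": "http://%s:%d" % (socket.gethostname(), PORT)}, f)
    app.run(threaded=True, host="0.0.0.0", port=PORT)
```

In the stock example this file sits in a directory (e.g. mock_dga) alongside a small shell script that simply launches it; the deployment client ships the whole directory to HDFS.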
This service will treat yahoo.com and amazon.com as legit and everything else as malicious. The contract is that the REST service exposes an endpoint, /apply, and returns JSON maps with a single key, is_malicious, whose value is either malicious or legit.
The following presumes that you are logged in as a user who has a home directory in HDFS under /user/$USER. If you do not, please create one and ensure the permissions are set appropriately:
```
su - hdfs -c "hadoop fs -mkdir /user/$USER"
su - hdfs -c "hadoop fs -chown $USER:$USER /user/$USER"
```
Or, in the common case for the metron user:
```
su - hdfs -c "hadoop fs -mkdir /user/metron"
su - hdfs -c "hadoop fs -chown metron:metron /user/metron"
```
Now let’s start MaaS and deploy the Mock DGA Service:
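A sketch of those two steps, using the flags from the usage screens above (the ZooKeeper quorum node1:2181, the HDFS path, and the mock_dga directory are environment-specific assumptions):

```
# Start the MaaS YARN service
maas_service.sh -zq node1:2181

# Deploy one 512 MB instance of the mock service as model "dga", version 1.0
maas_deploy.sh -zq node1:2181 -lmp mock_dga -hmp /user/metron/models -mo ADD -m 512 -n dga -v 1.0 -ni 1
```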
Now that we have a deployed model, let’s adjust the configurations for the Squid topology to annotate the messages with the output of the model.
{ "parserClassName": "org.apache.metron.parsers.GrokParser", "sensorTopic": "squid", "parserConfig": { "grokPath": "/patterns/squid", "patternLabel": "SQUID_DELIMITED", "timestampField": "timestamp" }, "fieldTransformations" : [ { "transformation" : "STELLAR" ,"output" : [ "full_hostname", "domain_without_subdomains", "is_malicious", "is_alert" ] ,"config" : { "full_hostname" : "URL_TO_HOST(url)" ,"domain_without_subdomains" : "DOMAIN_REMOVE_SUBDOMAINS(full_hostname)" ,"is_malicious" : "MAP_GET('is_malicious', MAAS_MODEL_APPLY(MAAS_GET_ENDPOINT('dga'), {'host' : domain_without_subdomains}))" ,"is_alert" : "if is_malicious == 'malicious' then 'true' else null" } } ] }
{ "enrichment" : { "fieldMap": {} }, "threatIntel" : { "fieldMap":{}, "triageConfig" : { "riskLevelRules" : [ { "rule" : "is_malicious == 'malicious'", "score" : 100 } ], "aggregator" : "MAX" } } }
Now we need to start the topologies and send some data:
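A sketch of those steps, assuming Metron's stock parser topology launcher and the squidclient utility (the Kafka broker and ZooKeeper hosts are environment-specific):

```
# Start the squid parser topology
$METRON_HOME/bin/start_parser_topology.sh -k node1:6667 -z node1:2181 -s squid

# Generate traffic through squid: yahoo.com should come back "legit",
# anything else (e.g. cnn.com) "malicious"
squidclient http://yahoo.com
squidclient http://cnn.com
```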