
Falco: Real-Time Monitoring and Anomaly Detection

Documentation

Falco is an open source security tool created by Sysdig and hosted under the CNCF umbrella, designed to monitor and detect suspicious activity in container and Kubernetes environments. It acts as an IDS (Intrusion Detection System) for containers, providing a runtime security layer that inspects and analyzes the behavior of processes, network connections, system calls (syscalls), and other activity inside the containers and on the host itself.

Static analysis is useful, but only when we actually run the code can we have any "certainty" that we are safe. By observing what is executing, which processes are being created, who is doing what, and other runtime events, we can gain some confidence that we are secure.

That is why we need a security solution that runs side by side with our applications, monitoring what is happening and alerting when something suspicious occurs. Preventing before the alert would be even better, but that is something Falco does not do. The "prevention" Falco offers is low-level observability so that you, yes, you, do better next time, probably using other tools. It can be considered the last stage of the game, the one that answers the question: how did this happen?

Some companies offer commercial solutions built on top of Falco, providing what it does not set out to do: prevention.

With Falco we get:

  • Threat detection: It monitors system events in real time, such as process execution, changes to sensitive files, and network activity. When it detects behavior that violates the defined security rules, it raises alerts.

    • Privilege escalation using privileged containers.
    • Namespace changes using tools such as setns.
    • Reads/writes to well-known directories such as /etc, /usr/bin, /usr/sbin...
    • Creation of symbolic links.
    • Changes of file owners and permissions.
    • Unexpected connections or socket mutations.
    • Processes spawned using execve.
    • Execution of shells such as sh, bash, csh, zsh, etc.
    • Execution of SSH binaries such as ssh, scp, sftp, etc.
    • Mutation of the Linux coreutils executables.
    • Mutation of login binaries.
    • Mutation of shadow-utils or passwd executables such as shadowconfig, pwck, chpasswd, gpasswd, chage, useradd, etc.
  • Syscall-based: It uses a kernel driver, either a kernel module or an eBPF (Extended Berkeley Packet Filter) probe, to collect syscalls from the Linux kernel, giving deep visibility into what is happening inside the containers and on the host itself.

  • Customizable security rules: It ships with a set of predefined security rules for Kubernetes and Docker environments, such as detecting interactive shell execution inside containers or binary creation in critical directories. These rules can be customized to fit your specific environment.

  • Integration with the security ecosystem: It can be integrated with other security systems and observability tools, such as the Elastic Stack, Prometheus, Grafana, or SIEMs (Security Information and Event Management), making it easy to collect and analyze logs and alerts.

  • Automated actions: It supports automated responses to security events, such as sending alerts to communication channels (Slack, Discord, email) or triggering webhooks, but to run custom scripts it needs help from other tools such as Falcosidekick.

    • Falcosidekick is a project that connects multiple Falco instances running in different places to a central point and forwards events to different outputs:
      • Chat (Slack, Telegram, Teams, Discord, Rocketchat, etc.)
      • Metrics (Datadog, InfluxDB, Prometheus, Dynatrace, etc.)
      • Alerts (AlertManager, Opsgenie, GrafanaOnCall, etc.)
      • Logs (Loki, Grafana, Syslog, ElasticSearch, etc.)
      • Storage (S3, GCP Storage...)
      • Serverless (Lambda, Knative, Kubeless, Functions, Tekton)
      • Message queues (MQTT, Kafka, RabbitMQ, AWS SQS, etc.)
      • Email
      • Database (Redis)
    • Through these integrations it is possible to build a solution that runs a script to mitigate the problem, for example stopping the container that is under a possible attack.
  • Easy integration with Kubernetes: It is easily deployed as a DaemonSet or Deployment, allowing you to monitor all the nodes of a cluster and check for suspicious activity, including auditing.

  • Extensible through plugins: A number of plugins extend its functionality.
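The Falcosidekick mentioned above can also be deployed straight from Falco's Helm chart. A hedged values sketch; the key names here are assumptions to double-check against the chart's values.yaml:

```yaml
# values-sidekick.yaml — hedged sketch, key names are assumptions
falcosidekick:
  enabled: true          # deploy Falcosidekick next to Falco
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/XXX"  # placeholder
```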

It is worth stressing that Falco will not make your environment more secure by itself, but it will make you aware when something suspicious happens.

Some Use Cases

  • Detect unexpected shell executions: Alerts on interactive shell executions in production containers.

  • Changes to critical binaries: Monitoring of changes to important binaries in /usr/bin or /bin.

  • Modification of sensitive configuration: Detect unexpected changes to Kubernetes configuration files, such as kube-apiserver.yaml.

  • Suspicious external network access: Monitor and alert on anomalous network connections that may indicate a security breach.

Although Falco is mostly used in containerized environments, it can also be installed on hosts that run no container runtime at all, since it focuses on syscalls.

Installation

Installation docs

The safest way to install Falco is directly on the nodes during bootstrap, so it cannot be removed by Kubernetes users and remains fully isolated as a service rather than a container managed by the kubelet. Agents deployed as a DaemonSet (one pod per node) can then read the alerts Falco fires on each node.

In the case of clouds and Kubernetes, a good idea is to build a VM image (in AWS that would be the AMI) with Falco pre-installed so that scaling out new nodes is faster. Using Packer for this task is left as a suggestion.

If you have a cluster with good security rules, users with the right permissions, and a system that is not very critical, I believe installing Falco as a container will cause no problems, and it will be good practice so that later on you can do an installation following the best practice mentioned above.

So we don't have to go node by node or make changes to the cluster right now, we will install via Helm to keep things simple.

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# Let's download the full Falco chart to inspect its values.yaml. It's a good way to learn what can be done.
helm pull falcosecurity/falco --untar

Before installing, let's look at what K8s Audit is.


K8s Audit

Falco also offers a set of rules for Kubernetes audit events, which used to live in the k8s_audit_rules.yaml file. This was migrated to the k8saudit plugin. With Falco's own Helm chart we can deploy using the values file that enables this plugin together with the default rules.

Kubernetes has an auditing system that records activities and events inside the cluster. These audit logs provide a detailed trail of who did what and when, helping to monitor and investigate suspicious or unauthorized activity.

Falco's k8saudit plugin lets Falco receive and process these events to detect suspicious or unauthorized activity.

For this we need to load the plugin, which will load the old k8s_audit_rules.yaml, and enable a service listening on a port so that the events are sent to Falco via webhook.
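On the Kubernetes side, the API server has to be told to ship audit events to that endpoint, typically via its --audit-webhook-config-file flag. A minimal sketch, assuming Falco's Service is named falco in the falco namespace and keeps the plugin's default port and path (9765, /k8s-audit):

```yaml
# audit-webhook.kubeconfig — hedged sketch of the kube-apiserver audit webhook
# backend; the service DNS name is an assumption for this example.
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://falco.falco.svc.cluster.local:9765/k8s-audit
contexts:
- name: default
  context:
    cluster: falco
    user: ""
current-context: default
```

You would also need an audit policy file on the API server deciding which events get logged and forwarded.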

If we look at the possible values files we have, we can see these:

root@cks-master:~# cd falco/
root@cks-master:~/falco# ls -l values*
-rw-r--r-- 1 root root 2136 Sep 9 20:40 values-gvisor-gke.yaml
# Falco with only the audit rules
-rw-r--r-- 1 root root 1710 Sep 9 20:40 values-k8saudit.yaml
# Audit plus syscall analysis
-rw-r--r-- 1 root root 1930 Sep 9 20:40 values-syscall-k8saudit.yaml
# default values
-rw-r--r-- 1 root root 71084 Sep 9 20:40 values.yaml

By applying values-syscall-k8saudit.yaml we simply override the values we care about on top of values.yaml, which contains only the default rules, also wiring up k8saudit. We can additionally switch the output to JSON format, which is very convenient for sending it, already structured, to other tools.

Installing...

# with K8s audit + default rules
root@cks-master:~/falco# helm install --replace falco --namespace falco --create-namespace --set tty=true --set falco.json_output=true falcosecurity/falco --values values-syscall-k8saudit.yaml

# without k8s audit
root@cks-master:~/falco# helm install --replace falco --namespace falco --create-namespace --set tty=true --set falco.json_output=true falcosecurity/falco

# or with k8s audit only, if that were the goal
root@cks-master:~/falco# helm install --replace falco --namespace falco --create-namespace --set tty=true --set falco.json_output=true falcosecurity/falco --values values-k8saudit.yaml

It is also possible to configure sending metrics to Prometheus, among other options. Study the values.yaml; it is worth it.

If you are running a local study cluster such as kind, minikube, or microk8s, it is worth checking here, as some extra configuration may be needed.

Falco is configured through the falco.yaml file, which in the Helm case is a ConfigMap mounted at /etc/falco/falco.yaml in each of the pods. For an installation directly on the host, the path is the same.

You will notice that most of the parameters in this config are defined in values.yaml itself, which alters those parameters in the ConfigMap.

root@cks-master:~/falco# k --namespace falco exec -it pods/falco-mx8jd -- sh -c "cat /etc/falco/falco.yaml"

# I'll drop a few comments inline in the ConfigMap below
Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
base_syscalls:
  custom_set: []
  repair: false
buffered_outputs: false
config_files:
- /etc/falco/config.d
engine:
  kind: kmod
  kmod:
    buf_size_preset: 4
    drop_failed_exit: false
falco_libs:
  thread_table_size: 262144
file_output:
  enabled: false
  filename: ./events.txt
  keep_alive: false
grpc:
  bind_address: unix:///run/falco/falco.sock
  enabled: false
  threadiness: 0
grpc_output:
  enabled: false
http_output:
  ca_bundle: ""
  ca_cert: ""
  ca_path: /etc/falco/certs/
  client_cert: /etc/falco/certs/client/client.crt
  client_key: /etc/falco/certs/client/client.key
  compress_uploads: false
  echo: false
  enabled: false
  insecure: false
  keep_alive: false
  mtls: false
  url: ""
  user_agent: falcosecurity/falco
json_include_output_property: true
json_include_tags_property: true
json_output: true # Adjusted as well
libs_logger:
  enabled: false
  severity: debug
load_plugins: # We enabled these plugins in the k8saudit values file, reflected here.
- k8saudit
- json
log_level: info
log_stderr: true
log_syslog: true
metrics:
  convert_memory_to_mb: true
  enabled: false
  include_empty_values: false
  interval: 1h
  kernel_event_counters_enabled: true
  libbpf_stats_enabled: true
  output_rule: true
  resource_utilization_enabled: true
  rules_counters_enabled: true
  state_counters_enabled: true
output_timeout: 2000
outputs:
  max_burst: 1000
  rate: 0
outputs_queue:
  capacity: 0
plugins:
- init_config: ""
  library_path: libk8saudit.so
  name: k8saudit
  open_params: http://:9765/k8s-audit
- init_config: ""
  library_path: libjson.so
  name: json
priority: debug
program_output:
  enabled: false
  keep_alive: false
  program: 'jq ''{text: .output}'' | curl -d @- -X POST https://hooks.slack.com/services/XXX'
rule_matching: first
rules_files: # k8s_audit_rules was also added here by the changes in the Helm values.
- /etc/falco/falco_rules.yaml
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d
stdout_output:
  enabled: true
syscall_event_drops:
  actions:
  - log
  - alert
  max_burst: 1
  rate: 0.03333
  simulate_drops: false
  threshold: 0.1
syscall_event_timeouts:
  max_consecutives: 1000
syslog_output:
  enabled: true
# time_format_iso_8601 = YYYY-MM-DDTHH:MM:SSZ using the host's time
time_format_iso_8601: false # << Adjust the time format if you want
watch_config_files: true
webserver:
  enabled: true
  k8s_healthz_endpoint: /healthz
  listen_port: 8765
  prometheus_metrics_enabled: false # << Enables the Prometheus endpoint
  ssl_certificate: /etc/falco/falco.pem
  ssl_enabled: false
  threadiness: 0

Let's try to deploy a "malicious" container that exposes a secret and can be accessed through a shell.

root@cks-master:~# k run pod --image=httpd -oyaml --dry-run=client > malicious.yaml
root@cks-master:~# vim malicious.yaml
root@cks-master:~# cat malicious.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod
  name: pod
spec:
  containers:
  - image: httpd
    name: pod
    resources: {}
    env:
    - name: SECRET
      value: "asdf1234"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
root@cks-master:~# k apply -f malicious.yaml
pod/pod created

# Let's check where this pod is running
# Worker node
root@cks-master:~# k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod 1/1 Running 0 99s 192.168.1.38 cks-worker <none> <none>

# Getting a shell into this pod
root@cks-master:~# k exec -it pod -- bash
root@pod:/usr/local/apache2# exit
exit

# Since Falco runs as a DaemonSet, one pod per node, let's check the output of the Falco pod running on the worker.
root@cks-master:~# k -n falco get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
falco-mx8jd 2/2 Running 0 48m 192.168.0.17 cks-master <none> <none>
falco-wvb64 2/2 Running 0 48m 192.168.1.15 cks-worker <none> <none>

root@cks-master:~# k logs -n falco pods/falco-wvb64
Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
Mon Sep 9 20:54:43 2024: Falco version: 0.38.2 (x86_64)
Mon Sep 9 20:54:43 2024: Falco initialized with configuration files:
Mon Sep 9 20:54:43 2024: /etc/falco/falco.yaml
Mon Sep 9 20:54:43 2024: System info: Linux version 5.15.0-1067-gcp (buildd@lcy02-amd64-117) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024
Mon Sep 9 20:54:43 2024: Loading plugin 'k8saudit' from file /usr/share/falco/plugins/libk8saudit.so
Mon Sep 9 20:54:43 2024: Loading plugin 'json' from file /usr/share/falco/plugins/libjson.so
Mon Sep 9 20:54:43 2024: Loading rules from file /etc/falco/falco_rules.yaml
Mon Sep 9 20:54:43 2024: Loading rules from file /etc/falco/k8s_audit_rules.yaml
Mon Sep 9 20:54:43 2024: Hostname value has been overridden via environment variable to: cks-worker
Mon Sep 9 20:54:43 2024: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Mon Sep 9 20:54:43 2024: Starting health webserver with threadiness 2, listening on 0.0.0.0:8765
Mon Sep 9 20:54:43 2024: Loaded event sources: syscall, k8s_audit
Mon Sep 9 20:54:43 2024: Enabled event sources: k8s_audit, syscall
Mon Sep 9 20:54:43 2024: Opening 'k8s_audit' source with plugin 'k8saudit'
Mon Sep 9 20:54:43 2024: Opening 'syscall' source with Kernel module
{"hostname":"cks-worker","output":"21:42:35.665718441: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=bash proc_exepath=/usr/bin/bash parent=containerd-shim command=bash terminal=34816 exe_flags=EXE_WRITABLE container_id=7c8973f32410 container_image=docker.io/library/httpd container_image_tag=latest container_name=pod k8s_ns=default k8s_pod_name=pod)","priority":"Notice","rule":"Terminal shell in container","source":"syscall","tags":["T1059","container","maturity_stable","mitre_execution","shell"],"time":"2024-09-09T21:42:35.665718441Z", "output_fields": {"container.id":"7c8973f32410","container.image.repository":"docker.io/library/httpd","container.image.tag":"latest","container.name":"pod","evt.arg.flags":"EXE_WRITABLE","evt.time":1725918155665718441,"evt.type":"execve","k8s.ns.name":"default","k8s.pod.name":"pod","proc.cmdline":"bash","proc.exepath":"/usr/bin/bash","proc.name":"bash","proc.pname":"containerd-shim","proc.tty":34816,"user.loginuid":-1,"user.name":"root","user.uid":0}}
root@cks-master:~#

# We have the log in JSON, so let's pretty-print it.
# We can see that someone opened a shell in that container.

root@cks-master:~# k logs -n falco pods/falco-wvb64 | grep Notice | jq .
Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
{
  "hostname": "cks-worker",
  "output": "21:42:35.665718441: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=bash proc_exepath=/usr/bin/bash parent=containerd-shim command=bash terminal=34816 exe_flags=EXE_WRITABLE container_id=7c8973f32410 container_image=docker.io/library/httpd container_image_tag=latest container_name=pod k8s_ns=default k8s_pod_name=pod)",
  "priority": "Notice",
  "rule": "Terminal shell in container",
  "source": "syscall",
  "tags": [
    "T1059",
    "container",
    "maturity_stable",
    "mitre_execution",
    "shell"
  ],
  "time": "2024-09-09T21:42:35.665718441Z",
  "output_fields": {
    "container.id": "7c8973f32410",
    "container.image.repository": "docker.io/library/httpd",
    "container.image.tag": "latest",
    "container.name": "pod",
    "evt.arg.flags": "EXE_WRITABLE",
    "evt.time": 1725918155665718500,
    "evt.type": "execve",
    "k8s.ns.name": "default",
    "k8s.pod.name": "pod",
    "proc.cmdline": "bash",
    "proc.exepath": "/usr/bin/bash",
    "proc.name": "bash",
    "proc.pname": "containerd-shim",
    "proc.tty": 34816,
    "user.loginuid": -1,
    "user.name": "root",
    "user.uid": 0
  }
}
# For the record, the Falco instance monitoring the control plane will not report anything, since this pod is not running on that node.

If you want to filter across all the Falco pods at once, we can use a selector to make it easier.

# Grabbing the logs of every container labeled app.kubernetes.io/name=falco
kubectl -n falco logs --selector app.kubernetes.io/name=falco --container falco | grep -E 'Notice|Critical|Error|Warning'
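Once the output is JSON, any field can be pulled out for quick triage with jq. A small self-contained sketch with a hand-made sample alert (shortened, but following the json_output shape shown above):

```shell
# Hedged sketch: extracting the fields you usually care about from a Falco JSON alert.
# The sample alert below is hand-made for illustration.
alert='{"priority":"Notice","rule":"Terminal shell in container","output_fields":{"k8s.ns.name":"default","k8s.pod.name":"pod"}}'
echo "$alert" | jq -r '[.priority, .rule, .output_fields["k8s.ns.name"] + "/" + .output_fields["k8s.pod.name"]] | join(" | ")'
# → Notice | Terminal shell in container | default/pod
```

In a real cluster you would pipe `kubectl logs ... | grep '^{'` into the same jq filter.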

Syntax vs. Ramp-Up vs. Rules

The rule syntax is simple and straight to the point. Reading rules is very easy; writing them, however, demands deep knowledge of which processes are running, which of them are legitimate, which are not, and where they point. I won't lie: it is a bit complicated and tedious, since it requires low-level operating-system knowledge. Let's raise our hands to the sky and thank the people who have contributed to this project.

Rules

Rules have a simple format. Let's look at the rule we just violated using the container we created.

All the fields below are REQUIRED except tags. There are other parameters that are optional.

- rule: Terminal shell in container # Rule name
  desc: > # Description
    A shell was used as the entrypoint/exec point into a container with an attached terminal. Parent process may have
    legitimately already exited and be null (read container_entrypoint macro). Common when using "kubectl exec" in Kubernetes.
    Correlate with k8saudit exec logs if possible to find user or serviceaccount token used (fuzzy correlation by namespace and pod name).
    Rather than considering it a standalone rule, it may be best used as generic auditing rule while examining other triggered
    rules in this container/tty.
  # spawned_process and container are macros; we'll cover them later
  condition: > # The part that matters
    spawned_process
    and container
    and shell_procs
    and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  # The output we see
  output: A shell was spawned in a container with an attached terminal (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
  priority: NOTICE # Level
  # Tags are used for filtering, but are not mandatory
  tags: [maturity_stable, container, shell, mitre_execution, T1059]

Look at the output field we got in the JSON and compare it with what we have above.

  "output": "21:42:35.665718441: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=bash proc_exepath=/usr/bin/bash parent=containerd-shim command=bash terminal=34816 exe_flags=EXE_WRITABLE container_id=7c8973f32410 container_image=docker.io/library/httpd container_image_tag=latest container_name=pod k8s_ns=default k8s_pod_name=pod)",
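Following the same schema, a minimal custom rule could look like the sketch below. The rule name and the package-manager condition are made up for illustration, reusing the spawned_process and container macros from the default rules:

```yaml
# Hedged sketch of a custom rule using the same required fields.
- rule: Package manager launched in container
  desc: Detect a package manager running inside a container at runtime
  condition: >
    spawned_process
    and container
    and proc.name in (apt, apt-get, apk, yum, dnf)
  output: Package manager run in container (command=%proc.cmdline container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container, software_mgmt]
```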

Priorities

The priority indicates the severity of the situation. It is similar to what you know from syslog. It is always included in the message whichever output type you choose, plain output or JSON.

The priorities below are listed from highest to lowest. When you choose to monitor from a given priority, you monitor from that level up.

  1. EMERGENCY: An emergency situation requiring immediate action. The highest priority level. Example: privilege escalation.
  2. ALERT: A serious problem that must be addressed immediately, though it may not be as urgent as an emergency.
  3. CRITICAL: A critical condition that needs to be corrected quickly to avoid more serious problems.
  4. ERROR: A rule related to write state (for example, the filesystem). It can wait.
  5. WARNING: A rule related to an unauthorized read, i.e. reading files that are not allowed.
  6. NOTICE: A rule related to unexpected behavior, such as a shell being spawned in a container, a strange network connection being opened, etc.
  7. INFORMATIONAL: A rule related to behavior that goes against good practices: privileged containers, containers mounting forbidden files, interactive commands run as root, etc.
  8. DEBUG
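The minimum priority Falco actually emits is controlled by the top-level priority key in falco.yaml (the ConfigMap dumped earlier shows debug, i.e. everything); raising it filters out everything below. For example:

```yaml
# falco.yaml — only alerts at this priority or higher are emitted
priority: warning
```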

Params

Other parameters for rules:

  • exceptions: A set of exceptions that keep the rule from generating an alert.
  • enabled (default: true): If set to false, the rule is neither loaded nor matched against any event.
  • tags: A list of tags applied to the rule (more on this here).
  • warn_evttypes (default: true): If set to false, Falco suppresses warnings related to a rule that has no event type (more on this here).
  • skip-if-unknown-filter (default: false): If set to true, and a rule condition contains a filter check, e.g. fd.some_new_field, that is unknown to this Falco version, Falco will silently accept the rule but not run it; if set to false, Falco will report an error and exit when it finds an unknown filter check.
  • source (default: syscall): The event source the rule is evaluated against. Typical values are syscall, k8s_audit, or the source advertised by a source plugin.

Tags are used to categorize rules into groups, and a rule can belong to several groups. Using the Falco command line we can enable or disable all the rules carrying a given tag instead of going one by one.

Tags

The tags we will find in the default rules are:

  • filesystem: Reading/writing files
  • software_mgmt: Applied to any software/package management tool, such as rpm, dpkg, etc.
  • process: Starting a new process or changing the state of a running process
  • database: The rule relates to databases
  • host: Only applies outside containers
  • shell: Shells being started
  • container: Only applies inside containers
  • cis: Related to the CIS Docker benchmark
  • users: User management or changing the identity of a running process
  • network: Network activity

Macros and Lists

To reuse code, we can factor out the parts that are common to several rules.

- macro: container
  condition: container.id != host
- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

The macros above define conditions that are used below to make the rule easier to read and the code reusable.

- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    spawned_process and
    container and
    proc.name = bash
  output:
  ...

It would be equivalent to this YAML:

- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    evt.type = execve and evt.dir = < and
    container.id != host and
    proc.name = bash
  ...

Lists can be reused as well.

- list: shell_binaries
  items: [bash, csh, ksh, sh, tcsh, zsh, dash]

- list: userexec_binaries
  items: [sudo, su]

- list: known_binaries
  items: [shell_binaries, userexec_binaries]

# Same thing as known_binaries
- list: known_binaries_equal
  items: [bash, csh, ksh, sh, tcsh, zsh, dash, sudo, su]

- macro: safe_procs
  condition: proc.name in (known_binaries)

# Same thing as safe_procs
- macro: safe_procs_equal
  condition: proc.name in (bash, csh, ksh, sh, tcsh, zsh, dash, sudo, su)
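The same reuse mechanism also lets you tune rules without editing the originals: redefining a macro in a rules file loaded later overrides the earlier definition. A hedged sketch reusing the names above (my_trusted_tool is made up):

```yaml
# Placed in a later rules file, this redefinition of safe_procs wins.
- macro: safe_procs
  condition: proc.name in (known_binaries, my_trusted_tool)
```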

To create or modify conditions, as mentioned earlier, you will need to understand a bit about syscalls and very patiently read through:

Default rules

syscall rules github

Now, knowing the concepts above, we can dive into the default rules.

As discussed earlier, Falco comes with some predefined rules, but nothing stops us from improving on them by adding more rules if needed.

Bear in mind that rules can cause false positives, since many of them may not apply to your use case, making it necessary to enable/disable or modify some of them. That's why I think it's worth keeping the whole default rule set right here to study whenever needed, instead of opening the container to see what we have.

If you need to enable/disable rules, see how to control them [here].

Let's check what we have in the rules.

# Grabbing any Falco pod here and checking /etc/falco/falco_rules.yaml
root@cks-master:~# k --namespace falco exec -it pods/falco-gntpx -- sh -c "cat /etc/falco/falco_rules.yaml"
Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
# I'll paste the output into the YAML below so it reads more nicely.

We can see that several macros and lists are defined, to be reused later in the rules. Don't be scared by the size of the file; read it slowly and calmly, focusing at first only on what matters.

falco.yaml includes the files that contain the default rules and already points at others we can use for custom rules.

# I'll trim part of the output and keep only what matters
root@cks-master:~# k --namespace falco exec -it pods/falco-wvb64 -- sh -c "cat /etc/falco/falco.yaml"
...
rule_matching: first
rules_files:
- /etc/falco/falco_rules.yaml
- /etc/falco/k8s_audit_rules.yaml
- /etc/falco/rules.d

The order of rules_files matters. If the same rule is defined again in a later file in the list, that definition overrides the previous one. This is how we make changes to rules. In values.yaml we have the following field:

customRules: {}
# Although Falco comes with a very good default set of rules for detecting
# strange behavior in containers, our users will want to customize the runtime
# security rule sets or policies for their specific container images and the
# applications they run. That functionality can be handled in this section.
#
# Example:
#
# rules-traefik.yaml: |-
#   [ rule body ]
If we created a rules-traefik.yaml with the rule body, it would end up inside the rules.d directory, and if that rule already existed, whether in falco_rules or in k8s_audit_rules, it would be overridden, since this is the last file to redefine the rule.
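A concrete sketch of that customRules field, overriding the default rule we triggered earlier (the file name and the WARNING bump are illustrative choices, not the chart's defaults):

```yaml
# values.yaml excerpt — ships rules-custom.yaml into /etc/falco/rules.d,
# where it is loaded last and therefore overrides earlier definitions.
customRules:
  rules-custom.yaml: |-
    - rule: Terminal shell in container
      desc: Overridden copy of the default rule, bumped to WARNING
      condition: spawned_process and container and shell_procs and proc.tty != 0
      output: Shell spawned in a container (command=%proc.cmdline container=%container.name)
      priority: WARNING
      tags: [container, shell]
```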

# Information about rules tags and fields can be found here: https://falco.org/docs/rules/#tags-for-current-falco-ruleset
# The initial item in the `tags` fields reflects the maturity level of the rules introduced per the proposal https://github.com/falcosecurity/rules/blob/main/proposals/20230605-rules-adoption-management-maturity-framework.md
# The `tags` fields also include information about the type of workload inspection (host and/or container), plus Mitre Attack killchain phases and Mitre TTP code(s)
# Mitre Attack references:
# [1] https://attack.mitre.org/tactics/enterprise/
# [2] https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json

# Starting with version 8, the Falco engine supports exceptions.
# However, the Falco rules file does not use them by default.
- required_engine_version: 0.31.0

# Currently disabled, as read/write are ignored syscalls. The nearby
# open_write/open_read checks look at files being opened for
# reading/writing.
# - macro: write
#   condition: (syscall.type=write and fd.type in (file, directory))
# - macro: read
#   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))

- macro: open_write
  condition: (evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0)

- macro: open_read
  condition: (evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0)

# Failed file open attempts, useful to detect threat actors making mistakes
# https://man7.org/linux/man-pages/man3/errno.3.html
# evt.res=ENOENT - No such file or directory
# evt.res=EACCESS - Permission denied
- macro: open_file_failed
  condition: (evt.type in (open,openat,openat2) and fd.typechar='f' and fd.num=-1 and evt.res startswith E)

# This `never_true` macro is used as a placeholder for tuning negative logical sub-expressions, for example
# - macro: allowed_ssh_hosts
#   condition: (never_true)
# can be used in a rule's expression with double negation `and not allowed_ssh_hosts`, which effectively evaluates
# to true and does nothing, the perfect empty template for `logical` cases as opposed to list templates.
# When tuning the rule you can override the macro with something useful, e.g.
# - macro: allowed_ssh_hosts
#   condition: (evt.hostname contains xyz)
- macro: never_true
  condition: (evt.num=0)

# This `always_true` macro is the counterpart of the `never_true` macro and is currently commented out as it
# is unused. You can use it as a placeholder for a positive logical sub-expression tuning macro
# template, e.g. `and custom_procs`, where
# - macro: custom_procs
#   condition: (always_true)
# later you can customize, override the macros to something like
# - macro: custom_procs
#   condition: (proc.name in (custom1, custom2, custom3))
# - macro: always_true
#   condition: (evt.num>=0)

# En algunos casos, como eventos de llamadas al sistema descartados, la información sobre
# el nombre del proceso puede faltar. Para algunas reglas que realmente dependen
# de la identidad del proceso realizando una acción como abrir
# un archivo, etc., requerimos que el nombre del proceso sea conocido.
# TODO: Por el momento mantenemos la variante `N/A` para compatibilidad con archivos scap antiguos
- macro: proc_name_exists
condition: (not proc.name in ("<NA>","N/A"))

- macro: spawned_process
condition: (evt.type in (execve, execveat) and evt.dir=<)
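# `evt.dir=<` restricts the macro to the exit event of execve/execveat, i.e.
# the point where the new program image is actually running. As an
# illustrative sketch (not part of the upstream ruleset), a minimal custom
# rule built on this macro could look like:
# - rule: Netcat Spawned (example)
#   desc: Illustrative example only
#   condition: spawned_process and proc.name=nc
#   output: nc spawned (command=%proc.cmdline user=%user.name)
#   priority: NOTICE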

- macro: create_symlink
condition: (evt.type in (symlink, symlinkat) and evt.dir=<)

- macro: create_hardlink
condition: (evt.type in (link, linkat) and evt.dir=<)

- macro: kernel_module_load
condition: (evt.type in (init_module, finit_module) and evt.dir=<)

- macro: dup
condition: (evt.type in (dup, dup2, dup3))

# File categories
- macro: etc_dir
condition: (fd.name startswith /etc/)

- list: shell_binaries
items: [ash, bash, csh, ksh, sh, tcsh, zsh, dash]

- macro: shell_procs
condition: (proc.name in (shell_binaries))

# dpkg -L login | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: login_binaries
items: [
login, systemd, '"(systemd)"', systemd-logind, su,
nologin, faillog, lastlog, newgrp, sg
]

# dpkg -L passwd | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: passwd_binaries
items: [
shadowconfig, grpck, pwunconv, grpconv, pwck,
groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod,
groupadd, groupdel, grpunconv, chgpasswd, userdel, chage, chsh,
gpasswd, chfn, expiry, passwd, vigr, cpgr, adduser, addgroup, deluser, delgroup
]

# repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' |
# awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: shadowutils_binaries
items: [
chage, gpasswd, lastlog, newgrp, sg, adduser, deluser, chpasswd,
groupadd, groupdel, addgroup, delgroup, groupmems, groupmod, grpck, grpconv, grpunconv,
newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, vipw, unix_chkpwd
]

- list: http_server_binaries
items: [nginx, httpd, httpd-foregroun, lighttpd, apache, apache2]

- list: db_server_binaries
items: [mysqld, postgres, sqlplus]

- list: postgres_mgmt_binaries
items: [pg_dumpall, pg_ctl, pg_lsclusters, pg_ctlcluster]

- list: nosql_server_binaries
items: [couchdb, memcached, redis-server, rabbitmq-server, mongod]

- list: gitlab_binaries
items: [gitlab-shell, gitlab-mon, gitlab-runner-b, git]

- macro: server_procs
condition: (proc.name in (http_server_binaries, db_server_binaries, docker_binaries, sshd))

# The explicit quotes are needed to avoid the - characters being
# interpreted by the filter expression.
- list: rpm_binaries
items: [dnf, dnf-automatic, rpm, rpmkey, yum, '"75-system-updat"', rhsmcertd-worke, rhsmcertd, subscription-ma,
repoquery, rpmkeys, rpmq, yum-cron, yum-config-mana, yum-debug-dump,
abrt-action-sav, rpmdb_stat, microdnf, rhn_check, yumdb]

- list: deb_binaries
items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
]
- list: python_package_managers
items: [pip, pip3, conda]

# The truncated dpkg-preconfigu is intentional, process names are
# truncated at the falcosecurity-libs level.
- list: package_mgmt_binaries
items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, python_package_managers, sane-utils.post, alternatives, chef-client, apk, snapd]

- macro: run_by_package_mgmt_binaries
condition: (proc.aname in (package_mgmt_binaries, needrestart))

# A canonical set of processes that run other programs with different
# privileges or as a different user.
- list: userexec_binaries
items: [sudo, su, suexec, critical-stack, dzdo]

- list: user_mgmt_binaries
items: [login_binaries, passwd_binaries, shadowutils_binaries]

- list: hids_binaries
items: [aide, aide.wrapper, update-aide.con, logcheck, syslog-summary, osqueryd, ossec-syscheckd]

- list: vpn_binaries
items: [openvpn]

- list: nomachine_binaries
items: [nxexec, nxnode.bin, nxserver.bin, nxclient.bin]

- list: mail_binaries
items: [
sendmail, sendmail-msp, postfix, procmail, exim4,
pickup, showq, mailq, dovecot, imap-login, imap,
mailmng-core, pop3-login, dovecot-lda, pop3
]

- list: mail_config_binaries
items: [
update_conf, parse_mc, makemap_hash, newaliases, update_mk, update_tlsm4,
update_db, update_mc, ssmtp.postinst, mailq, postalias, postfix.config.,
postfix.config, postfix-script, postconf
]

- list: sensitive_file_names
items: [/etc/shadow, /etc/sudoers, /etc/pam.conf, /etc/security/pwquality.conf]

- list: sensitive_directory_names
items: [/, /etc, /etc/, /root, /root/]

- macro: sensitive_files
condition: >
((fd.name startswith /etc and fd.name in (sensitive_file_names)) or
fd.directory in (/etc/sudoers.d, /etc/pam.d))

# Indicates that the process is new. Currently detected using time
# since process was started, using a threshold of 5 seconds.
- macro: proc_is_new
condition: (proc.duration <= 5000000000)
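# proc.duration is expressed in nanoseconds, so 5000000000 ns = 5 s. If your
# trusted servers legitimately read configuration some time after startup,
# you can widen the window by overriding this macro in a local rules file,
# e.g. to 60 seconds:
# - macro: proc_is_new
#   condition: (proc.duration <= 60000000000)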

# Use this to test whether the event occurred within a container.
# When displaying container information in the output field, use
# %container.info, without any leading term (file=%fd.name
# %container.info user=%user.name user_loginuid=%user.loginuid, and not file=%fd.name
# container=%container.info user=%user.name user_loginuid=%user.loginuid). The output will change
# based on the context and whether or not -pk/-pm/-pc was specified on
# the command line.
- macro: container
condition: (container.id != host)

- macro: interactive
condition: >
((proc.aname=sshd and proc.name != sshd) or
proc.name=systemd-logind or proc.name=login)

- list: cron_binaries
items: [anacron, cron, crond, crontab]

# https://github.com/liske/needrestart
- list: needrestart_binaries
items: [needrestart, 10-dpkg, 20-rpm, 30-pacman]

# Possible scripts run by sshkit
- list: sshkit_script_binaries
items: [10_etc_sudoers., 10_passwd_group]

# System users that should never log into a system. Consider adding your own
# service users (e.g. 'apache' or 'mysqld') here.
- macro: system_users
condition: (user.name in (bin, daemon, games, lp, mail, nobody, sshd, sync, uucp, www-data))

- macro: ansible_running_python
condition: (proc.name in (python, pypy, python3) and proc.cmdline contains ansible)

# Qualys seems to run a variety of shell subprocesses, at various
# levels. This checks at a few levels without the cost of a full
# proc.aname, which traverses the full parent hierarchy.
- macro: run_by_qualys
condition: >
(proc.pname=qualys-cloud-ag or
proc.aname[2]=qualys-cloud-ag or
proc.aname[3]=qualys-cloud-ag or
proc.aname[4]=qualys-cloud-ag)

- macro: run_by_google_accounts_daemon
condition: >
(proc.aname[1] startswith google_accounts or
proc.aname[2] startswith google_accounts or
proc.aname[3] startswith google_accounts)

# Chef is similar.
- macro: run_by_chef
condition: (proc.aname[2]=chef_command_wr or proc.aname[3]=chef_command_wr or
proc.aname[2]=chef-client or proc.aname[3]=chef-client or
proc.name=chef-client)

# Also handles running semi-indirectly via scl
- macro: run_by_foreman
condition: >
(user.name=foreman and
((proc.pname in (rake, ruby, scl) and proc.aname[5] in (tfm-rake,tfm-ruby)) or
(proc.pname=scl and proc.aname[2] in (tfm-rake,tfm-ruby))))

- macro: python_mesos_marathon_scripting
condition: (proc.pcmdline startswith "python3 /marathon-lb/marathon_lb.py")

- macro: splunk_running_forwarder
condition: (proc.pname=splunkd and proc.cmdline startswith "sh -c /opt/splunkforwarder")

- macro: perl_running_plesk
condition: (proc.cmdline startswith "perl /opt/psa/admin/bin/plesk_agent_manager" or
proc.pcmdline startswith "perl /opt/psa/admin/bin/plesk_agent_manager")

- macro: perl_running_updmap
condition: (proc.cmdline startswith "perl /usr/bin/updmap")

- macro: perl_running_centrifydc
condition: (proc.cmdline startswith "perl /usr/share/centrifydc")

- macro: runuser_reading_pam
condition: (proc.name=runuser and fd.directory=/etc/pam.d)

# CIS Linux Benchmark program
- macro: linux_bench_reading_etc_shadow
condition: ((proc.aname[2]=linux-bench and
proc.name in (awk,cut,grep)) and
(fd.name=/etc/shadow or
fd.directory=/etc/pam.d))

- macro: veritas_driver_script
condition: (proc.cmdline startswith "perl /opt/VRTSsfmh/bin/mh_driver.pl")

- macro: user_ssh_directory
condition: (fd.name contains '/.ssh/' and fd.name glob '/home/*/.ssh/*')

- macro: directory_traversal
condition: (fd.nameraw contains '../' and fd.nameraw glob '*../*../*')

# ******************************************************************************
# * "Directory traversal monitored file read" requires FALCO_ENGINE_VERSION 13 *
# ******************************************************************************
- rule: Directory traversal monitored file read
desc: >
Web applications can be vulnerable to directory traversal attacks that allow accessing files outside of the web app's root directory
(e.g. Arbitrary File Read bugs). System directories like /etc are typically accessed via absolute paths. Access patterns outside of this
(here path traversal) can be regarded as suspicious. This rule includes failed file open attempts.
condition: >
(open_read or open_file_failed)
and (etc_dir or user_ssh_directory or
fd.name startswith /root/.ssh or
fd.name contains "id_rsa")
and directory_traversal
and not proc.pname in (shell_binaries)
enabled: true
output: Read monitored file via directory traversal (file=%fd.name fileraw=%fd.nameraw gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_credential_access, T1555]
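# Illustrative trigger (hypothetical request and paths): a vulnerable web app
# handling
#   GET /download?file=../../../etc/passwd
# ends up opening a path like /var/www/html/../../../etc/passwd. The resolved
# fd.name falls under etc_dir, while the unresolved fd.nameraw still contains
# the '../' sequences required by directory_traversal.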

- macro: cmp_cp_by_passwd
condition: (proc.name in (cmp, cp) and proc.pname in (passwd, run-parts))

- macro: user_known_read_sensitive_files_activities
condition: (never_true)

- rule: Read sensitive file trusted after startup
desc: >
An attempt to read any sensitive file (e.g. files containing user/password/authentication
information) by a trusted program after startup. Trusted programs might read these files
at startup to load initial state, but not afterwards. Can be customized as needed.
In modern containerized cloud infrastructures, accessing traditional Linux sensitive files
might be less relevant, yet it remains valuable for baseline detections. While we provide additional
rules for SSH or cloud vendor-specific credentials, you can significantly enhance your security
program by crafting custom rules for critical application credentials unique to your environment.
condition: >
open_read
and sensitive_files
and server_procs
and not proc_is_new
and proc.name!="sshd"
and not user_known_read_sensitive_files_activities
output: Sensitive file opened for reading by trusted program after startup (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_credential_access, T1555]
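# To tune away known-good reads, override the
# user_known_read_sensitive_files_activities macro in a local rules file
# rather than editing the rule itself. The process name below is a
# placeholder, not a recommendation:
# - macro: user_known_read_sensitive_files_activities
#   condition: (proc.name=my-backup-agent and fd.name=/etc/shadow)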

- list: read_sensitive_file_binaries
items: [
iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, sshd,
vsftpd, systemd, mysql_install_d, psql, screen, debconf-show, sa-update,
pam-auth-update, pam-config, /usr/sbin/spamd, polkit-agent-he, lsattr, file, sosreport,
scxcimservera, adclient, rtvscand, cockpit-session, userhelper, ossec-syscheckd
]

# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to allow for specific combinations of
# programs accessing sensitive files.
# fluentd_writing_conf_files is a good example to follow, as it
# specifies both the program doing the writing as well as the specific
# files it is allowed to modify.
#
# In this file, it just takes one of the macros in the base rule
# and repeats it.
- macro: user_read_sensitive_file_conditions
condition: cmp_cp_by_passwd

- list: read_sensitive_file_images
items: []

- macro: user_read_sensitive_file_containers
condition: (container and container.image.repository in (read_sensitive_file_images))

# This macro detects man-db postinst, see https://salsa.debian.org/debian/man-db/-/blob/master/debian/postinst
# The rule "Read sensitive file untrusted" uses this macro to avoid FPs.
- macro: mandb_postinst
condition: >
(proc.name=perl and proc.args startswith "-e" and
proc.args contains "@pwd = getpwnam(" and
proc.args contains "exec " and
proc.args contains "/usr/bin/mandb")

- rule: Read sensitive file untrusted
desc: >
An attempt to read any sensitive file (e.g. files containing user/password/authentication
information). Exceptions are made for known trusted programs. Can be customized as needed.
In modern containerized cloud infrastructures, accessing traditional Linux sensitive files
might be less relevant, yet it remains valuable for baseline detections. While we provide additional
rules for SSH or cloud vendor-specific credentials, you can significantly enhance your security
program by crafting custom rules for critical application credentials unique to your environment.
condition: >
open_read
and sensitive_files
and proc_name_exists
and not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries,
cron_binaries, read_sensitive_file_binaries, shell_binaries, hids_binaries,
vpn_binaries, mail_config_binaries, nomachine_binaries, sshkit_script_binaries,
in.proftpd, mandb, salt-call, salt-minion, postgres_mgmt_binaries,
google_oslogin_
)
and not cmp_cp_by_passwd
and not ansible_running_python
and not run_by_qualys
and not run_by_chef
and not run_by_google_accounts_daemon
and not user_read_sensitive_file_conditions
and not mandb_postinst
and not perl_running_plesk
and not perl_running_updmap
and not veritas_driver_script
and not perl_running_centrifydc
and not runuser_reading_pam
and not linux_bench_reading_etc_shadow
and not user_known_read_sensitive_files_activities
and not user_read_sensitive_file_containers
output: Sensitive file opened for reading by non-trusted program (file=%fd.name gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_credential_access, T1555]

- macro: postgres_running_wal_e
condition: (proc.pname=postgres and (proc.cmdline startswith "sh -c envdir /etc/wal-e.d/env /usr/local/bin/wal-e" or proc.cmdline startswith "sh -c envdir \"/run/etc/wal-e.d/env\" wal-g wal-push"))

- macro: redis_running_prepost_scripts
condition: (proc.aname[2]=redis-server and (proc.cmdline contains "redis-server.post-up.d" or proc.cmdline contains "redis-server.pre-up.d"))

- macro: rabbitmq_running_scripts
condition: >
(proc.pname=beam.smp and
(proc.cmdline startswith "sh -c exec ps" or
proc.cmdline startswith "sh -c exec inet_gethost" or
proc.cmdline= "sh -s unix:cmd" or
proc.cmdline= "sh -c exec /bin/sh -s unix:cmd 2>&1"))

- macro: rabbitmqctl_running_scripts
condition: (proc.aname[2]=rabbitmqctl and proc.cmdline startswith "sh -c ")

- macro: run_by_appdynamics
condition: (proc.pexe endswith java and proc.pcmdline contains " -jar -Dappdynamics")

# The binaries in this list and their descendants are *not* allowed to
# spawn shells. This includes the binaries spawning shells directly as
# well as indirectly. For example, apache -> php/perl for
# mod_{php,perl} -> some shell is also not allowed, because the shell
# has apache as an ancestor.
- list: protected_shell_spawning_binaries
items: [
http_server_binaries, db_server_binaries, nosql_server_binaries, mail_binaries,
fluentd, flanneld, splunkd, consul, smbd, runsv, PM2
]

- macro: parent_java_running_zookeeper
condition: (proc.pexe endswith java and proc.pcmdline contains org.apache.zookeeper.server)

- macro: parent_java_running_kafka
condition: (proc.pexe endswith java and proc.pcmdline contains kafka.Kafka)

- macro: parent_java_running_elasticsearch
condition: (proc.pexe endswith java and proc.pcmdline contains org.elasticsearch.bootstrap.Elasticsearch)

- macro: parent_java_running_activemq
condition: (proc.pexe endswith java and proc.pcmdline contains activemq.jar)

- macro: parent_java_running_cassandra
condition: (proc.pexe endswith java and (proc.pcmdline contains "-Dcassandra.config.loader" or proc.pcmdline contains org.apache.cassandra.service.CassandraDaemon))

- macro: parent_java_running_jboss_wildfly
condition: (proc.pexe endswith java and proc.pcmdline contains org.jboss)

- macro: parent_java_running_glassfish
condition: (proc.pexe endswith java and proc.pcmdline contains com.sun.enterprise.glassfish)

- macro: parent_java_running_hadoop
condition: (proc.pexe endswith java and proc.pcmdline contains org.apache.hadoop)

- macro: parent_java_running_datastax
condition: (proc.pexe endswith java and proc.pcmdline contains com.datastax)

- macro: nginx_starting_nginx
condition: (proc.pname=nginx and proc.cmdline contains "/usr/sbin/nginx -c /etc/nginx/nginx.conf")

- macro: nginx_running_aws_s3_cp
condition: (proc.pname=nginx and proc.cmdline startswith "sh -c /usr/local/bin/aws s3 cp")

- macro: consul_running_net_scripts
condition: (proc.pname=consul and (proc.cmdline startswith "sh -c curl" or proc.cmdline startswith "sh -c nc"))

- macro: consul_running_alert_checks
condition: (proc.pname=consul and proc.cmdline startswith "sh -c /bin/consul-alerts")

- macro: serf_script
condition: (proc.cmdline startswith "sh -c serf")

- macro: check_process_status
condition: (proc.cmdline startswith "sh -c kill -0 ")

# In some cases, you may want to consider node processes run directly
# in containers as protected shell spawners. Examples include using
# pm2-docker or pm2 start some-app.js --no-daemon-mode as the direct
# entrypoint of the container, and when the node app is a long-lived
# server using something like express.
#
# However, there are other uses of node related to build pipelines for
# which node is not really a server but instead a general scripting
# tool. In these cases, shells are very likely and in these cases you
# don't want to consider node processes protected shell spawners.
#
# We have to choose one of these cases, so we consider node processes
# as unprotected by default. If you want to consider any node process
# run in a container as a protected shell spawner, override the below
# macro to remove the "never_true" clause, which allows it to take effect.
- macro: possibly_node_in_container
condition: (never_true and (proc.pname=node and proc.aname[3]=docker-containe))

# Similarly, you may want to consider any shell spawned by apache
# tomcat as suspect. The famous apache struts attack (CVE-2017-5638)
# could be exploited to do things like spawn shells.
#
# However, many applications *do* use tomcat to run arbitrary shells,
# as a part of build pipelines, etc.
#
# Like for node, we make this case opt-in.
- macro: possibly_parent_java_running_tomcat
condition: (never_true and proc.pexe endswith java and proc.pcmdline contains org.apache.catalina.startup.Bootstrap)

- macro: protected_shell_spawner
condition: >
(proc.aname in (protected_shell_spawning_binaries)
or parent_java_running_zookeeper
or parent_java_running_kafka
or parent_java_running_elasticsearch
or parent_java_running_activemq
or parent_java_running_cassandra
or parent_java_running_jboss_wildfly
or parent_java_running_glassfish
or parent_java_running_hadoop
or parent_java_running_datastax
or possibly_parent_java_running_tomcat
or possibly_node_in_container)

- list: mesos_shell_binaries
items: [mesos-docker-ex, mesos-slave, mesos-health-ch]

# Note that runsv is both in protected_shell_spawner and the
# exclusions by pname. This means that runsv can itself spawn shells
# (the ./run and ./finish scripts), but processes spawned below runsv's
# direct children can not spawn shells.
- rule: Run shell untrusted
desc: >
An attempt to spawn a shell below a non-shell application. The non-shell applications that are monitored are
defined in the protected_shell_spawner macro, with protected_shell_spawning_binaries being the list you can
easily customize. For Java parent processes, please note that Java often has a custom process name. Therefore,
rely more on proc.exe to define Java applications. This rule can be noisier, as you can see in the exhaustive
existing tuning. However, given it is very behavior-driven and broad, it is universally relevant to catch
general Remote Code Execution (RCE). Allocate time to tune this rule for your use cases and reduce noise.
Tuning suggestions include looking at the duration of the parent process (proc.ppid.duration) to define your
long-running app processes. Checking for newer fields such as proc.vpgid.name and proc.vpgid.exe instead of the
direct parent process being a non-shell application could make the rule more robust.
condition: >
spawned_process
and shell_procs
and proc.pname exists
and protected_shell_spawner
and not proc.pname in (shell_binaries, gitlab_binaries, cron_binaries, user_known_shell_spawn_binaries,
needrestart_binaries,
mesos_shell_binaries,
erl_child_setup, exechealthz,
PM2, PassengerWatchd, c_rehash, svlogd, logrotate, hhvm, serf,
lb-controller, nvidia-installe, runsv, statsite, erlexec, calico-node,
"puma reactor")
and not proc.cmdline in (known_shell_spawn_cmdlines)
and not proc.aname in (unicorn_launche)
and not consul_running_net_scripts
and not consul_running_alert_checks
and not nginx_starting_nginx
and not nginx_running_aws_s3_cp
and not run_by_package_mgmt_binaries
and not serf_script
and not check_process_status
and not run_by_foreman
and not python_mesos_marathon_scripting
and not splunk_running_forwarder
and not postgres_running_wal_e
and not redis_running_prepost_scripts
and not rabbitmq_running_scripts
and not rabbitmqctl_running_scripts
and not run_by_appdynamics
and not user_shell_container_exclusions
output: Shell spawned by untrusted binary (parent_exe=%proc.pexe parent_exepath=%proc.pexepath pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] aname[4]=%proc.aname[4] aname[5]=%proc.aname[5] aname[6]=%proc.aname[6] aname[7]=%proc.aname[7] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: NOTICE
tags: [maturity_stable, host, container, process, shell, mitre_execution, T1059.004]
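# Illustrative trigger (hypothetical): a PHP webshell served by apache2 that
# calls system('id') spawns `sh -c id`. The shell has apache2 (an entry in
# http_server_binaries, hence protected_shell_spawning_binaries) as an
# ancestor, so protected_shell_spawner matches and this rule fires.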

# These images are allowed both to run with --privileged and to mount
# sensitive paths from the host filesystem.
#
# NOTE: This list is only provided for backwards compatibility with
# older local falco rules files that may have been appending to
# trusted_images. To make customizations, it's better to add images to
# either privileged_images or falco_sensitive_mount_images.
- list: trusted_images
items: []

- list: sematext_images
items: [docker.io/sematext/sematext-agent-docker, docker.io/sematext/agent, docker.io/sematext/logagent,
registry.access.redhat.com/sematext/sematext-agent-docker,
registry.access.redhat.com/sematext/agent,
registry.access.redhat.com/sematext/logagent]

# Falco containers
- list: falco_containers
items:
- falcosecurity/falco
- docker.io/falcosecurity/falco
- public.ecr.aws/falcosecurity/falco

# Falco no driver containers
- list: falco_no_driver_containers
items:
- falcosecurity/falco-no-driver
- docker.io/falcosecurity/falco-no-driver
- public.ecr.aws/falcosecurity/falco-no-driver

# These container images are allowed to run with --privileged and full set of capabilities
- list: falco_privileged_images
items: [
falco_containers,
docker.io/calico/node,
calico/node,
docker.io/cloudnativelabs/kube-router,
docker.io/docker/ucp-agent,
docker.io/mesosphere/mesos-slave,
docker.io/rook/toolbox,
docker.io/sysdig/sysdig,
gcr.io/google_containers/kube-proxy,
gcr.io/google-containers/startup-script,
gcr.io/projectcalico-org/node,
gke.gcr.io/kube-proxy,
gke.gcr.io/gke-metadata-server,
gke.gcr.io/netd-amd64,
gke.gcr.io/watcher-daemonset,
gcr.io/google-containers/prometheus-to-sd,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/kube-proxy,
registry.k8s.io/prometheus-to-sd,
quay.io/calico/node,
sysdig/sysdig,
sematext_images,
registry.k8s.io/dns/k8s-dns-node-cache,
mcr.microsoft.com/oss/kubernetes/kube-proxy
]

# The steps libcontainer performs to set up the root program for a container are:
# - clone + exec self to a program runc:[0:PARENT]
# - clone a program runc:[1:CHILD] which sets up all the namespaces
# - clone a second program runc:[2:INIT] + exec to the root program.
# The parent of runc:[2:INIT] is runc:[0:PARENT].
# As soon as 1:CHILD is created, 0:PARENT exits, so there's a race
# where at the time 2:INIT execs the root program, 0:PARENT might have
# already exited, or might still be around. So we handle both.
# We also let runc:[1:CHILD] count as the parent process, which can occur
# when we lose events and lose track of state.
- macro: container_entrypoint
condition: (not proc.pname exists or proc.pname in (runc:[0:PARENT], runc:[1:CHILD], runc, docker-runc, exe, docker-runc-cur, containerd-shim, systemd, crio))

- macro: user_known_system_user_login
condition: (never_true)

# Anything run interactively by root
# - condition: evt.type != switch and user.name = root and proc.name != sshd and interactive
# output: "Interactive root (%user.name %proc.name %evt.dir %evt.type %evt.args %fd.name)"
# priority: WARNING
- rule: System user interactive
desc: >
System (e.g. non-login) users spawning new processes. Can add custom service users (e.g. apache or mysqld).
'Interactive' is defined as new processes as descendants of an ssh session or login process. Consider further tuning
by only looking at processes in a terminal / tty (proc.tty != 0). A newer field proc.is_vpgid_leader could be of help
to distinguish if the process was "directly" executed, for instance, in a tty, or executed as a descendant process in the
same process group, which, for example, is the case when subprocesses are spawned from a script. Consider this rule
as a great template rule to monitor interactive accesses to your systems more broadly. However, such a custom rule would be
unique to your environment. The rule "Terminal shell in container" that fires when using "kubectl exec" is more Kubernetes
relevant, whereas this one could be more interesting for the underlying host.
condition: >
spawned_process
and system_users
and interactive
and not user_known_system_user_login
output: System user ran an interactive command (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: INFO
tags: [maturity_stable, host, container, users, mitre_execution, T1059, NIST_800-53_AC-2]

# In some cases, a shell is expected to be run in a container. For example, configuration
# management software may do this, which is expected.
- macro: user_expected_terminal_shell_in_container_conditions
condition: (never_true)

- rule: Terminal shell in container
desc: >
A shell was used as the entrypoint/exec point into a container with an attached terminal. Parent process may have
legitimately already exited and be null (read container_entrypoint macro). Common when using "kubectl exec" in Kubernetes.
Correlate with k8saudit exec logs if possible to find user or serviceaccount token used (fuzzy correlation by namespace and pod name).
Rather than considering it a standalone rule, it may be best used as generic auditing rule while examining other triggered
rules in this container/tty.
condition: >
spawned_process
and container
and shell_procs
and proc.tty != 0
and container_entrypoint
and not user_expected_terminal_shell_in_container_conditions
output: A shell was spawned in a container with an attached terminal (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: NOTICE
tags: [maturity_stable, container, shell, mitre_execution, T1059]
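# Illustrative trigger: `kubectl exec -it <pod> -- /bin/bash` allocates a
# terminal (proc.tty != 0) and execs bash directly under the container
# runtime, so container_entrypoint and shell_procs both match and this rule
# fires.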

# For some container types (mesos), there isn't a container image to
# work with, and the container name is autogenerated, so there isn't
# any stable aspect of the software to work with. In this case, we
# fall back to allowing certain command lines.
- list: known_shell_spawn_cmdlines
items: [
'"sh -c uname -p 2> /dev/null"',
'"sh -c uname -s 2>&1"',
'"sh -c uname -r 2>&1"',
'"sh -c uname -v 2>&1"',
'"sh -c uname -a 2>&1"',
'"sh -c ruby -v 2>&1"',
'"sh -c getconf CLK_TCK"',
'"sh -c getconf PAGESIZE"',
'"sh -c LC_ALL=C LANG=C /sbin/ldconfig -p 2>/dev/null"',
'"sh -c LANG=C /sbin/ldconfig -p 2>/dev/null"',
'"sh -c /sbin/ldconfig -p 2>/dev/null"',
'"sh -c stty -a 2>/dev/null"',
'"sh -c stty -a < /dev/tty"',
'"sh -c stty -g < /dev/tty"',
'"sh -c node index.js"',
'"sh -c node index"',
'"sh -c node ./src/start.js"',
'"sh -c node app.js"',
'"sh -c node -e \"require(''nan'')\""',
'"sh -c node -e \"require(''nan'')\")"',
'"sh -c node $NODE_DEBUG_OPTION index.js "',
'"sh -c crontab -l 2"',
'"sh -c lsb_release -a"',
'"sh -c lsb_release -is 2>/dev/null"',
'"sh -c whoami"',
'"sh -c node_modules/.bin/bower-installer"',
'"sh -c /bin/hostname -f 2> /dev/null"',
'"sh -c locale -a"',
'"sh -c -t -i"',
'"sh -c openssl version"',
'"bash -c id -Gn kafadmin"',
'"sh -c /bin/sh -c ''date +%%s''"',
'"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
]
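# To allow an additional command line, append to this list in a local rules
# file instead of redefining the rule (the command below is a placeholder):
# - list: known_shell_spawn_cmdlines
#   append: true
#   items: ['"sh -c my-healthcheck.sh"']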

# This list allows for easy additions to the set of commands allowed
# to run shells in containers without having to copy and override the
# entire run shell in container macro. Once
# https://github.com/falcosecurity/falco/issues/255 is fixed this will be a
# bit easier, as someone could append to any of the existing lists.
- list: user_known_shell_spawn_binaries
items: []

# This macro allows for easy additions to the set of commands allowed
# to run shells in containers without having to override the entire
# rule. Its default value is an expression that always is false, which
# becomes true when the "not ..." in the rule is applied.
- macro: user_shell_container_exclusions
condition: (never_true)

# Containers from IBM Cloud
- list: ibm_cloud_containers
items:
- icr.io/ext/sysdig/agent
- registry.ng.bluemix.net/armada-master/metrics-server-amd64
- registry.ng.bluemix.net/armada-master/olm

# In a local/user rules file, list the namespace or container images that are
# allowed to contact the K8s API Server from within a container. This
# might cover cases where the K8s infrastructure itself is running
# within a container.
- macro: k8s_containers
condition: >
(container.image.repository in (gcr.io/google_containers/hyperkube-amd64,
gcr.io/google_containers/kube2sky,
docker.io/sysdig/sysdig, sysdig/sysdig,
fluent/fluentd-kubernetes-daemonset, prom/prometheus,
falco_containers,
falco_no_driver_containers,
ibm_cloud_containers,
velero/velero,
quay.io/jetstack/cert-manager-cainjector, weaveworks/kured,
quay.io/prometheus-operator/prometheus-operator,
registry.k8s.io/ingress-nginx/kube-webhook-certgen, quay.io/spotahome/redis-operator,
registry.opensource.zalan.do/acid/postgres-operator, registry.opensource.zalan.do/acid/postgres-operator-ui,
rabbitmqoperator/cluster-operator, quay.io/kubecost1/kubecost-cost-model,
docker.io/bitnami/prometheus, docker.io/bitnami/kube-state-metrics, mcr.microsoft.com/oss/azure/aad-pod-identity/nmi)
or (k8s.ns.name = "kube-system"))

- macro: k8s_api_server
condition: (fd.sip.name="kubernetes.default.svc.cluster.local")

- macro: user_known_contact_k8s_api_server_activities
condition: (never_true)

- rule: Contact K8S API Server From Container
desc: >
Detect attempts to communicate with the K8S API Server from a container by non-profiled users. Kubernetes APIs play a
pivotal role in configuring the cluster management lifecycle. Detecting potential unauthorized access to the API server
is of utmost importance. Audit your complete infrastructure and pinpoint any potential machines from which the API server
might be accessible based on your network layout. If Falco can't operate on all these machines, consider analyzing the
Kubernetes audit logs (typically drained from control nodes, and Falco offers a k8saudit plugin) as an additional data
source for detections within the control plane.
condition: >
evt.type=connect and evt.dir=<
and (fd.typechar=4 or fd.typechar=6)
and container
and k8s_api_server
and not k8s_containers
and not user_known_contact_k8s_api_server_activities
output: Unexpected connection to K8s API Server from container (connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: NOTICE
tags: [maturity_stable, container, network, k8s, mitre_discovery, T1565]
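
To profile a workload that legitimately contacts the API server without overriding the whole rule, redefine the template macro in a local rules file. A sketch under the same convention as above; `registry.example.com/metrics-agent` is a hypothetical image:

```yaml
# falco_rules.local.yaml -- whitelist a known API-server client so the
# "Contact K8S API Server From Container" rule stays quiet for it.
# The image name is an example placeholder.
- macro: user_known_contact_k8s_api_server_activities
  condition: (container.image.repository = "registry.example.com/metrics-agent")
```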

- rule: Netcat Remote Code Execution in Container
desc: >
Netcat Program runs inside container that allows remote code execution and may be utilized
as a part of a variety of reverse shell payload https://github.com/swisskyrepo/PayloadsAllTheThings/.
These programs are of higher relevance as they are commonly installed on UNIX-like operating systems.
Can fire in combination with the "Redirect STDOUT/STDIN to Network Connection in Container"
rule as it utilizes a different evt.type.
condition: >
spawned_process
and container
and ((proc.name = "nc" and (proc.cmdline contains " -e" or
proc.cmdline contains " -c")) or
(proc.name = "ncat" and (proc.args contains "--sh-exec" or
proc.args contains "--exec" or proc.args contains "-e " or
proc.args contains "-c " or proc.args contains "--lua-exec"))
)
output: Netcat runs inside container that allows remote code execution (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, container, network, process, mitre_execution, T1059]

- list: grep_binaries
items: [grep, egrep, fgrep]

- macro: grep_commands
condition: (proc.name in (grep_binaries))

# a less restrictive search for things that might be passwords/ssh/user etc.
- macro: grep_more
condition: (never_true)

- macro: private_key_or_password
condition: >
(proc.args icontains "BEGIN PRIVATE" or
proc.args icontains "BEGIN OPENSSH PRIVATE" or
proc.args icontains "BEGIN RSA PRIVATE" or
proc.args icontains "BEGIN DSA PRIVATE" or
proc.args icontains "BEGIN EC PRIVATE" or
(grep_more and
(proc.args icontains " pass " or
proc.args icontains " ssh " or
proc.args icontains " user "))
)

- rule: Search Private Keys or Passwords
desc: >
Detect attempts to search for private keys or passwords using the grep or find command. This is often seen with
unsophisticated attackers, as there are many ways to access files using bash built-ins that could go unnoticed.
Regardless, this serves as a solid baseline detection that can be tailored to cover these gaps while maintaining
an acceptable noise level.
condition: >
spawned_process
and ((grep_commands and private_key_or_password) or
(proc.name = "find" and (proc.args contains "id_rsa" or
proc.args contains "id_dsa" or
proc.args contains "id_ed25519" or
proc.args contains "id_ecdsa"
)
))
output: Grep private keys or passwords activities found (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, host, container, process, filesystem, mitre_credential_access, T1552.001]
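
The less restrictive search mentioned above can be switched on by redefining `grep_more` in a local rules file. This sketch assumes the stock `always_true` macro from the default ruleset is available:

```yaml
# falco_rules.local.yaml -- widen private_key_or_password so that
# grep invocations containing " pass ", " ssh " or " user " also match.
# Expect a higher noise level with this enabled.
- macro: grep_more
  condition: (always_true)
```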

- list: log_directories
items: [/var/log, /dev/log]

- list: log_files
items: [syslog, auth.log, secure, kern.log, cron, user.log, dpkg.log, last.log, yum.log, access_log, mysql.log, mysqld.log]

- macro: access_log_files
condition: (fd.directory in (log_directories) or fd.filename in (log_files))

# A placeholder to whitelist log files that may legitimately be cleared. Recommended form: (fd.name startswith "/var/log/app1")
- macro: allowed_clear_log_files
condition: (never_true)
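
Following the comment's recommended form, a local override might look like this. `/var/log/app1` is a placeholder path for an application that rotates its logs by truncation (note that `startswith` is a prefix match, so no trailing glob is needed):

```yaml
# falco_rules.local.yaml -- exempt app-specific log files that are
# legitimately truncated, e.g. by the application's own log rotation.
- macro: allowed_clear_log_files
  condition: (fd.name startswith "/var/log/app1")
```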

- macro: trusted_logging_images
condition: (container.image.repository endswith "splunk/fluentd-hec" or
container.image.repository endswith "fluent/fluentd-kubernetes-daemonset" or
container.image.repository endswith "openshift3/ose-logging-fluentd" or
container.image.repository endswith "containernetworking/azure-npm")

- macro: containerd_activities
condition: (proc.name=containerd and (fd.name startswith "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/" or
fd.name startswith "/var/lib/containerd/tmpmounts/"))

- rule: Clear Log Activities
desc: >
Detect clearing of critical access log files, typically done to erase evidence that could be attributed to an adversary's
actions. To effectively customize and operationalize this detection, check for potentially missing log file destinations
relevant to your environment, and adjust the profiled containers you wish not to be alerted on.
condition: >
open_write
and access_log_files
and evt.arg.flags contains "O_TRUNC"
and not containerd_activities
and not trusted_logging_images
and not allowed_clear_log_files
output: Log files were tampered (file=%fd.name evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_defense_evasion, T1070, NIST_800-53_AU-10]

- list: data_remove_commands
items: [shred, mkfs, mke2fs]

- macro: clear_data_procs
condition: (proc.name in (data_remove_commands))

- macro: user_known_remove_data_activities
condition: (never_true)

- rule: Remove Bulk Data from Disk
desc: >
Detect a process running to clear bulk data from disk with the intention to destroy data, possibly interrupting availability
to systems. Profile your environment and use user_known_remove_data_activities to tune this rule.
condition: >
spawned_process
and clear_data_procs
and not user_known_remove_data_activities
output: Bulk data has been removed from disk (file=%fd.name evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, host, container, process, filesystem, mitre_impact, T1485]

- rule: Create Symlink Over Sensitive Files
desc: >
Detect symlinks created over a curated list of sensitive files or subdirectories under /etc/ or
root directories. Can be customized as needed. Refer to further and equivalent guidance within the
rule "Read sensitive file untrusted".
condition: >
create_symlink
and (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
output: Symlinks created over sensitive files (target=%evt.arg.target linkpath=%evt.arg.linkpath evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_credential_access, T1555]

- rule: Create Hardlink Over Sensitive Files
desc: >
Detect hardlinks created over a curated list of sensitive files or subdirectories under /etc/ or
root directories. Can be customized as needed. Refer to further and equivalent guidance within the
rule "Read sensitive file untrusted".
condition: >
create_hardlink
and (evt.arg.oldpath in (sensitive_file_names))
output: Hardlinks created over sensitive files (target=%evt.arg.oldpath linkpath=%evt.arg.newpath evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, filesystem, mitre_credential_access, T1555]

- list: user_known_packet_socket_binaries
items: []

- rule: Packet socket created in container
desc: >
Detect new packet socket at the device driver (OSI Layer 2) level in a container. Packet socket could be used for ARP Spoofing
and privilege escalation (CVE-2020-14386) by an attacker. Noise can be reduced by using the user_known_packet_socket_binaries
template list.
condition: >
evt.type=socket and evt.dir=>
and container
and evt.arg.domain contains AF_PACKET
and not proc.name in (user_known_packet_socket_binaries)
output: Packet socket was created in a container (socket_info=%evt.args connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: NOTICE
tags: [maturity_stable, container, network, mitre_credential_access, T1557.002]
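
Noise from this rule is reduced by populating the template list in a local rules file. Redefining the list replaces its (empty) default; `tcpdump` and `dhclient` are example binaries that commonly open AF_PACKET sockets, adjust for your environment:

```yaml
# falco_rules.local.yaml -- binaries allowed to create packet sockets
# in containers. The two entries are illustrative examples.
- list: user_known_packet_socket_binaries
  items: [tcpdump, dhclient]
```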

- macro: user_known_stand_streams_redirect_activities
condition: (never_true)

# As of engine version 20 this rule can be improved by using the fd.types[]
# field so it only triggers once when all three of std{out,err,in} are
# redirected.
#
# - list: ip_sockets
# items: ["ipv4", "ipv6"]
#
# - rule: Redirect STDOUT/STDIN to Network Connection in Container once
# condition: dup and container and evt.rawres in (0, 1, 2) and fd.type in (ip_sockets) and fd.types[0] in (ip_sockets) and fd.types[1] in (ip_sockets) and fd.types[2] in (ip_sockets) and not user_known_stand_streams_redirect_activities
#
# The following rule has not been changed by default as existing users could be
# relying on the rule triggering when any of std{out,err,in} are redirected.
- rule: Redirect STDOUT/STDIN to Network Connection in Container
desc: >
Detect redirection of stdout/stdin to a network connection within a container, achieved by utilizing a
variant of the dup syscall (potential reverse shell or remote code execution
https://github.com/swisskyrepo/PayloadsAllTheThings/). This detection is behavior-based and may generate
noise in the system, and can be adjusted using the user_known_stand_streams_redirect_activities template
macro. Tuning can be performed similarly to existing detections based on process lineage or container images,
and/or it can be limited to interactive tty (tty != 0).
condition: >
dup
and container
and evt.rawres in (0, 1, 2)
and fd.type in ("ipv4", "ipv6")
and not user_known_stand_streams_redirect_activities
output: Redirect stdout/stdin to network connection (gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] fd.sip=%fd.sip connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: NOTICE
tags: [maturity_stable, container, network, process, mitre_execution, T1059]

- list: allowed_container_images_loading_kernel_module
items: []

- rule: Linux Kernel Module Injection Detected
desc: >
Detect injection of Linux kernel modules from containers using insmod or modprobe via the init_module and finit_module
syscalls, given the precondition of sys_module effective capabilities. Profile the environment and consider
allowed_container_images_loading_kernel_module to reduce noise and account for legitimate cases.
condition: >
kernel_module_load
and container
and thread.cap_effective icontains sys_module
and not container.image.repository in (allowed_container_images_loading_kernel_module)
output: Linux Kernel Module injection from container (parent_exepath=%proc.pexepath gparent=%proc.aname[2] gexepath=%proc.aexepath[2] module=%proc.args res=%evt.res evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, process, mitre_persistence, TA0003]

- rule: Debugfs Launched in Privileged Container
desc: >
Detect file system debugger debugfs launched inside a privileged container which might lead to container escape.
This rule has a narrower scope.
condition: >
spawned_process
and container
and container.privileged=true
and proc.name=debugfs
output: Debugfs launched in a privileged container (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, container, cis, process, mitre_privilege_escalation, T1611]

- rule: Detect release_agent File Container Escapes
desc: >
Detect an attempt to exploit a container escape using release_agent file.
By running a container with certain capabilities, a privileged user can modify the
release_agent file and escape from the container.
condition: >
open_write
and container
and fd.name endswith release_agent
and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE)
and thread.cap_effective contains CAP_SYS_ADMIN
output: Detect an attempt to exploit a container escape using release_agent file (file=%fd.name cap_effective=%thread.cap_effective evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: CRITICAL
tags: [maturity_stable, container, process, mitre_privilege_escalation, T1611]

- list: docker_binaries
items: [docker, dockerd, containerd-shim, "runc:[1:CHILD]", pause, exe, docker-compose, docker-entrypoi, docker-runc-cur, docker-current, dockerd-current]

- list: known_ptrace_binaries
items: []

- macro: known_ptrace_procs
condition: (proc.name in (known_ptrace_binaries))

- macro: ptrace_attach_or_injection
condition: >
(evt.type=ptrace and evt.dir=> and
(evt.arg.request contains PTRACE_POKETEXT or
evt.arg.request contains PTRACE_POKEDATA or
evt.arg.request contains PTRACE_ATTACH or
evt.arg.request contains PTRACE_SEIZE or
evt.arg.request contains PTRACE_SETREGS))

- rule: PTRACE attached to process
desc: >
Detect an attempt to inject potentially malicious code into a process using PTRACE in order to evade
process-based defenses or elevate privileges. Common anti-patterns are debuggers. Additionally, profiling
your environment via the known_ptrace_procs template macro can reduce noise.
A successful ptrace syscall generates multiple logs at once.
condition: >
ptrace_attach_or_injection
and proc_name_exists
and not known_ptrace_procs
output: Detected ptrace PTRACE_ATTACH attempt (proc_pcmdline=%proc.pcmdline evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: WARNING
tags: [maturity_stable, host, container, process, mitre_privilege_escalation, T1055.008]

- rule: PTRACE anti-debug attempt
desc: >
Detect usage of the PTRACE system call with the PTRACE_TRACEME argument, indicating a program actively attempting
to avoid debuggers attaching to the process. This behavior is typically indicative of malware activity.
Read more about PTRACE in the "PTRACE attached to process" rule.
condition: >
evt.type=ptrace and evt.dir=>
and evt.arg.request contains PTRACE_TRACEME
and proc_name_exists
output: Detected potential PTRACE_TRACEME anti-debug attempt (proc_pcmdline=%proc.pcmdline evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: NOTICE
tags: [maturity_stable, host, container, process, mitre_defense_evasion, T1622]

- macro: private_aws_credentials
condition: >
(proc.args icontains "aws_access_key_id" or
proc.args icontains "aws_secret_access_key" or
proc.args icontains "aws_session_token" or
proc.args icontains "accesskeyid" or
proc.args icontains "secretaccesskey")

- rule: Find AWS Credentials
desc: >
Detect attempts to search for private keys or passwords using the grep or find command, particularly targeting standard
AWS credential locations. This is often seen with unsophisticated attackers, as there are many ways to access files
using bash built-ins that could go unnoticed. Regardless, this serves as a solid baseline detection that can be tailored
to cover these gaps while maintaining an acceptable noise level. This rule complements the rule "Search Private Keys or Passwords".
condition: >
spawned_process
and ((grep_commands and private_aws_credentials) or
(proc.name = "find" and proc.args endswith ".aws/credentials"))
output: Detected AWS credentials search activity (proc_pcmdline=%proc.pcmdline proc_cwd=%proc.cwd group_gid=%group.gid group_name=%group.name user_loginname=%user.loginname evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, host, container, process, aws, mitre_credential_access, T1552]

- rule: Execution from /dev/shm
desc: >
This rule detects file execution in the /dev/shm directory, a tactic often used by threat actors to store their readable, writable, and
occasionally executable files. /dev/shm acts as a link to the host or other containers, creating vulnerabilities for their compromise
as well. Notably, /dev/shm remains unchanged even after a container restart. Consider this rule alongside the newer
"Drop and execute new binary in container" rule.
condition: >
spawned_process
and (proc.exe startswith "/dev/shm/" or
(proc.cwd startswith "/dev/shm/" and proc.exe startswith "./" ) or
(shell_procs and proc.args startswith "-c /dev/shm") or
(shell_procs and proc.args startswith "-i /dev/shm") or
(shell_procs and proc.args startswith "/dev/shm") or
(proc.cwd startswith "/dev/shm/" and proc.args startswith "./" ))
and not container.image.repository in (falco_privileged_images, trusted_images)
output: File execution detected from /dev/shm (evt_res=%evt.res file=%fd.name proc_cwd=%proc.cwd proc_pcmdline=%proc.pcmdline user_loginname=%user.loginname group_gid=%group.gid group_name=%group.name evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: WARNING
tags: [maturity_stable, host, container, mitre_execution, T1059.004]

# List of allowed container images that are known to execute binaries not part of their base image.
- list: known_drop_and_execute_containers
items: []

- macro: known_drop_and_execute_activities
condition: (never_true)

- rule: Drop and execute new binary in container
desc: >
Detect if an executable not belonging to the base image of a container is being executed.
The drop and execute pattern can be observed very often after an attacker gained an initial foothold.
is_exe_upper_layer filter field only applies for container runtimes that use overlayfs as union mount filesystem.
Adopters can utilize the provided template list known_drop_and_execute_containers containing allowed container
images known to execute binaries not included in their base image. Alternatively, you could exclude non-production
namespaces in Kubernetes settings by adjusting the rule further. This helps reduce noise by applying application
and environment-specific knowledge to this rule. Common anti-patterns include administrators or SREs performing
ad-hoc debugging.
condition: >
spawned_process
and container
and proc.is_exe_upper_layer=true
and not container.image.repository in (known_drop_and_execute_containers)
and not known_drop_and_execute_activities
output: Executing binary not part of base image (proc_exe=%proc.exe proc_sname=%proc.sname gparent=%proc.aname[2] proc_exe_ino_ctime=%proc.exe_ino.ctime proc_exe_ino_mtime=%proc.exe_ino.mtime proc_exe_ino_ctime_duration_proc_start=%proc.exe_ino.ctime_duration_proc_start proc_cwd=%proc.cwd container_start_ts=%container.start_ts evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: CRITICAL
tags: [maturity_stable, container, process, mitre_persistence, TA0003, PCI_DSS_11.5.1]
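
As the description suggests, images known to execute binaries not shipped in their base image go into the template list. A local-rules sketch; the image name is a placeholder:

```yaml
# falco_rules.local.yaml -- container images exempt from the
# "Drop and execute new binary in container" rule, e.g. CI builders
# that download tooling at runtime. The entry is an example placeholder.
- list: known_drop_and_execute_containers
  items: [registry.example.com/ci-build-cache]
```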

# RFC1918 addresses were assigned for private network usage
- list: rfc_1918_addresses
items: ['"10.0.0.0/8"', '"172.16.0.0/12"', '"192.168.0.0/16"']

- macro: outbound
condition: >
(((evt.type = connect and evt.dir=<) or
(evt.type in (sendto,sendmsg) and evt.dir=< and
fd.l4proto != tcp and fd.connected=false and fd.name_changed=true)) and
(fd.typechar = 4 or fd.typechar = 6) and
(fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and
(evt.rawres >= 0 or evt.res = EINPROGRESS))

- list: ssh_non_standard_ports
items: [80, 8080, 88, 443, 8443, 53, 4444]

- macro: ssh_non_standard_ports_network
condition: (fd.sport in (ssh_non_standard_ports))

- rule: Disallowed SSH Connection Non Standard Port
desc: >
Detect any new outbound SSH connection from the host or container using a non-standard port. This rule holds the potential
to detect a family of reverse shells that cause the victim machine to connect back out over SSH, with STDIN piped from
the SSH connection to a shell's STDIN, and STDOUT of the shell piped back over SSH. Such an attack can be launched against
any app that is vulnerable to command injection. The upstream rule only covers a limited selection of non-standard ports.
We suggest adding more ports, potentially incorporating ranges based on your environment's knowledge and custom SSH port
configurations. This rule can complement the "Redirect STDOUT/STDIN to Network Connection in Container" or
"Disallowed SSH Connection" rule.
condition: >
outbound
and proc.exe endswith ssh
and fd.l4proto=tcp
and ssh_non_standard_ports_network
output: Disallowed SSH Connection (connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
priority: NOTICE
tags: [maturity_stable, host, container, network, process, mitre_execution, T1059]
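
The rule description recommends extending the upstream port selection. Since redefining a list replaces it, a local override must repeat the original items; 2222 and 31337 below are example additions, not upstream defaults:

```yaml
# falco_rules.local.yaml -- extend the non-standard SSH port list.
# The first seven entries mirror the upstream defaults; the last two
# are illustrative additions for this environment.
- list: ssh_non_standard_ports
  items: [80, 8080, 88, 443, 8443, 53, 4444, 2222, 31337]
```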

- list: known_memfd_execution_binaries
items: []

- macro: known_memfd_execution_processes
condition: (proc.name in (known_memfd_execution_binaries))

- rule: Fileless execution via memfd_create
desc: >
Detect if a binary is executed from memory using the memfd_create technique. This is a well-known defense evasion
technique for executing malware on a victim machine without storing the payload on disk and to avoid leaving traces
about what has been executed. Adopters can whitelist processes that may use fileless execution for benign purposes
by adding items to the list known_memfd_execution_processes.
condition: >
spawned_process
and proc.is_exe_from_memfd=true
and not known_memfd_execution_processes
output: Fileless execution via memfd_create (container_start_ts=%container.start_ts proc_cwd=%proc.cwd evt_res=%evt.res proc_sname=%proc.sname gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
priority: CRITICAL
tags: [maturity_stable, host, container, process, mitre_defense_evasion, T1620]

K8s Audit Rules

k8s audit rules github

root@cks-master:~# k --namespace falco exec -it pods/falco-wvb64 -- sh -c "cat /etc/falco/k8s_audit_rules.yaml"
Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
- required_engine_version: 15

- required_plugin_versions:
- name: k8saudit
version: 0.7.0
alternatives:
- name: k8saudit-eks
version: 0.4.0
- name: json
version: 0.7.0

# Like always_true/always_false, but works with k8s audit events
- macro: k8s_audit_always_true
condition: (jevt.rawtime exists)

- macro: k8s_audit_never_true
condition: (jevt.rawtime=0)

# Generally only consider audit events once the response has completed
- list: k8s_audit_stages
items: ["ResponseComplete"]

# Generally exclude users starting with "system:"
- macro: non_system_user
condition: (not ka.user.name startswith "system:")

# This macro selects the set of Audit Events used by the below rules.
- macro: kevt
condition: (jevt.value[/stage] in (k8s_audit_stages))

- macro: kevt_started
condition: (jevt.value[/stage]=ResponseStarted)

# If you wish to restrict activity to a specific set of users, override/append to this list.
# users created by kops are included
- list: vertical_pod_autoscaler_users
items: ["vpa-recommender", "vpa-updater"]

- list: allowed_k8s_users
items: [
"minikube", "minikube-user", "kubelet", "kops", "admin", "kube", "kube-proxy", "kube-apiserver-healthcheck",
"kubernetes-admin",
vertical_pod_autoscaler_users,
cluster-autoscaler,
"system:addon-manager",
"cloud-controller-manager",
"system:kube-controller-manager"
]

- list: eks_allowed_k8s_users
items: [
"eks:node-manager",
"eks:certificate-controller",
"eks:fargate-scheduler",
"eks:k8s-metrics",
"eks:authenticator",
"eks:cluster-event-watcher",
"eks:nodewatcher",
"eks:pod-identity-mutating-webhook",
"eks:cloud-controller-manager",
"eks:vpc-resource-controller",
"eks:addon-manager",
]
- rule: Disallowed K8s User
desc: Detect any k8s operation by users outside of an allowed set of users.
condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
priority: WARNING
source: k8s_audit
tags: [k8s]

# In a local/user rules file, you could override this macro to
# explicitly enumerate the container images that you want to run in
# your environment. In this main falco rules file, there isn't any way
# to know all the containers that can run, so any container is
# allowed, by using the always_true macro. In the overridden macro, the condition
# would look something like (ka.req.pod.containers.image.repository in (my-repo/my-image))
- macro: allowed_k8s_containers
condition: (k8s_audit_always_true)
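
Applied in a local/user rules file, the override described in the comment above looks like this. `my-repo/my-image` is the comment's own placeholder for your allowed repositories:

```yaml
# Local rules file -- replace the always-true default so only pods
# built from an explicit set of images are allowed; everything else
# fires the "Create Disallowed Pod" rule below.
- macro: allowed_k8s_containers
  condition: (ka.req.pod.containers.image.repository in (my-repo/my-image))
```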

- macro: response_successful
condition: (ka.response.code startswith 2)

- macro: kget
condition: ka.verb=get

- macro: kcreate
condition: ka.verb=create

- macro: kmodify
condition: (ka.verb in (create,update,patch))

- macro: kdelete
condition: ka.verb=delete

- macro: pod
condition: ka.target.resource=pods and not ka.target.subresource exists

- macro: pod_subresource
condition: ka.target.resource=pods and ka.target.subresource exists

- macro: deployment
condition: ka.target.resource=deployments

- macro: service
condition: ka.target.resource=services

- macro: configmap
condition: ka.target.resource=configmaps

- macro: namespace
condition: ka.target.resource=namespaces

- macro: serviceaccount
condition: ka.target.resource=serviceaccounts

- macro: clusterrole
condition: ka.target.resource=clusterroles

- macro: clusterrolebinding
condition: ka.target.resource=clusterrolebindings

- macro: role
condition: ka.target.resource=roles

- macro: secret
condition: ka.target.resource=secrets

- macro: health_endpoint
condition: ka.uri=/healthz or ka.uri startswith /healthz?

- macro: live_endpoint
condition: ka.uri=/livez or ka.uri startswith /livez?

- macro: ready_endpoint
condition: ka.uri=/readyz or ka.uri startswith /readyz?

- rule: Create Disallowed Pod
desc: >
Detect an attempt to start a pod with a container image outside of a list of allowed images.
condition: kevt and pod and kcreate and not allowed_k8s_containers
output: Pod started with container not in allowed list (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- rule: Create Privileged Pod
desc: >
Detect an attempt to start a pod with a privileged container
condition: kevt and pod and kcreate and ka.req.pod.containers.privileged intersects (true) and not ka.req.pod.containers.image.repository in (falco_privileged_images)
output: Pod started with privileged container (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- macro: sensitive_vol_mount
condition: >
(ka.req.pod.volumes.hostpath intersects (/proc, /var/run/docker.sock, /, /etc, /root, /var/run/crio/crio.sock, /run/containerd/containerd.sock, /home/admin, /var/lib/kubelet, /var/lib/kubelet/pki, /etc/kubernetes, /etc/kubernetes/manifests))

- rule: Create Sensitive Mount Pod
desc: >
Detect an attempt to start a pod with a volume from a sensitive host directory (i.e. /proc).
Exceptions are made for known trusted images.
condition: kevt and pod and kcreate and sensitive_vol_mount and not ka.req.pod.containers.image.repository in (falco_sensitive_mount_images)
output: Pod started with sensitive mount (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace resource=%ka.target.resource images=%ka.req.pod.containers.image volumes=%jevt.value[/requestObject/spec/volumes])
priority: WARNING
source: k8s_audit
tags: [k8s]

# These container images are allowed to run with hostnetwork=true
# TODO: Remove k8s.gcr.io reference after 01/Dec/2023
- list: falco_hostnetwork_images
items: [
gcr.io/google-containers/prometheus-to-sd,
gcr.io/projectcalico-org/typha,
gcr.io/projectcalico-org/node,
gke.gcr.io/gke-metadata-server,
gke.gcr.io/kube-proxy,
gke.gcr.io/netd-amd64,
k8s.gcr.io/ip-masq-agent-amd64,
k8s.gcr.io/prometheus-to-sd,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/prometheus-to-sd
]

# Corresponds to K8s CIS Benchmark 1.7.4
- rule: Create HostNetwork Pod
desc: Detect an attempt to start a pod using the host network.
condition: kevt and pod and kcreate and ka.req.pod.host_network intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostnetwork_images)
output: Pod started using host network (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- list: falco_hostpid_images
items: []

- rule: Create HostPid Pod
desc: Detect an attempt to start a pod using the host pid namespace.
condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- list: falco_hostipc_images
items: []

- rule: Create HostIPC Pod
desc: Detect an attempt to start a pod using the host ipc namespace.
condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- macro: user_known_node_port_service
condition: (k8s_audit_never_true)

- rule: Create NodePort Service
desc: >
Detect an attempt to start a service with a NodePort service type
condition: kevt and service and kcreate and ka.req.service.type=NodePort and not user_known_node_port_service
output: NodePort Service Created (user=%ka.user.name service=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace ports=%ka.req.service.ports)
priority: WARNING
source: k8s_audit
tags: [k8s]
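# To silence this alert for deliberately exposed services, the
# user_known_node_port_service macro can be redefined in a local/user
# rules file. Sketch only: the service name is a placeholder.
# - macro: user_known_node_port_service
#   condition: (ka.target.name=my-ingress-nodeport)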

- macro: contains_private_credentials
condition: >
(ka.req.configmap.obj contains "aws_access_key_id" or
ka.req.configmap.obj contains "aws-access-key-id" or
ka.req.configmap.obj contains "aws_s3_access_key_id" or
ka.req.configmap.obj contains "aws-s3-access-key-id" or
ka.req.configmap.obj contains "password" or
ka.req.configmap.obj contains "passphrase")

- rule: Create/Modify Configmap With Private Credentials
desc: >
Detect creating/modifying a configmap containing a private credential (aws key, password, etc.)
condition: kevt and configmap and kmodify and contains_private_credentials
output: K8s configmap with private credential (user=%ka.user.name verb=%ka.verb resource=%ka.target.resource configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
priority: WARNING
source: k8s_audit
tags: [k8s]
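# # How to test:
# # Apply a ConfigMap whose serialized object contains one of the matched
# # strings (name and value below are placeholders), e.g.:
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: demo-config
# data:
#   db_password: "changeme"
# # The key "db_password" contains "password", so the macro matches.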

# Corresponds to K8s CIS Benchmark, 1.1.1.
- rule: Anonymous Request Allowed
desc: >
Detect any request made by the anonymous user that was allowed
condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason)
priority: WARNING
source: k8s_audit
tags: [k8s]

# Roughly corresponds to K8s CIS Benchmark, 1.1.12. In this case,
# notifies an attempt to exec/attach to a privileged container.

# Ideally, we'd add a more stringent rule that detects attaches/execs
# to a privileged pod, but that requires the engine for k8s audit
# events to be stateful, so it could know if a container named in an
# attach request was created privileged or not. For now, we have a
# less severe rule that detects attaches/execs to any pod.
#
# For the same reason, you can't use things like image names/prefixes,
# as the event that creates the pod (which has the images) is a
# separate event than the actual exec/attach to the pod.

- macro: user_known_exec_pod_activities
condition: (k8s_audit_never_true)

- rule: Attach/Exec Pod
desc: >
Detect any attempt to attach/exec to a pod
condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (exec,attach) and not user_known_exec_pod_activities
output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command])
priority: NOTICE
source: k8s_audit
tags: [k8s]
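# # How to test (pod name is a placeholder):
# kubectl exec -it mypod -- /bin/sh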

- macro: user_known_portforward_activities
condition: (k8s_audit_never_true)

- rule: port-forward
desc: >
Detect any attempt to portforward
condition: ka.target.subresource in (portforward) and not user_known_portforward_activities
output: Portforward to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource)
priority: NOTICE
source: k8s_audit
tags: [k8s]

- macro: user_known_pod_debug_activities
condition: (k8s_audit_never_true)

# Only works when feature gate EphemeralContainers is enabled
- rule: EphemeralContainers Created
desc: >
Detect any ephemeral container created
condition: kevt and pod_subresource and kmodify and ka.target.subresource in (ephemeralcontainers) and not user_known_pod_debug_activities
output: Ephemeral container is created in pod (user=%ka.user.name pod=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace ephemeral_container_name=%jevt.value[/requestObject/ephemeralContainers/0/name] ephemeral_container_image=%jevt.value[/requestObject/ephemeralContainers/0/image])
priority: NOTICE
source: k8s_audit
tags: [k8s]

# In a local/user rules file, you can append to this list to add additional allowed namespaces
- list: allowed_namespaces
items: [kube-system, kube-public, default]

- rule: Create Disallowed Namespace
desc: Detect any attempt to create a namespace outside of a set of known namespaces
condition: kevt and namespace and kcreate and not ka.target.name in (allowed_namespaces)
output: Disallowed namespace created (user=%ka.user.name ns=%ka.target.name resource=%ka.target.resource)
priority: WARNING
source: k8s_audit
tags: [k8s]
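# Example of extending the list from a local/user rules file (namespace
# names are placeholders; `append: true` assumes the classic override syntax):
# - list: allowed_namespaces
#   items: [staging, production]
#   append: true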

# Only defined for backwards compatibility. Use the more specific
# user_allowed_kube_namespace_image_list instead.
- list: user_trusted_image_list
items: []

- list: user_allowed_kube_namespace_image_list
items: [user_trusted_image_list]

# Only defined for backwards compatibility. Use the more specific
# allowed_kube_namespace_image_list instead.
- list: k8s_image_list
items: []

# TODO: Remove k8s.gcr.io reference after 01/Dec/2023
- list: allowed_kube_namespace_image_list
items: [
gcr.io/google-containers/prometheus-to-sd,
gcr.io/projectcalico-org/node,
gke.gcr.io/addon-resizer,
gke.gcr.io/heapster,
gke.gcr.io/gke-metadata-server,
k8s.gcr.io/ip-masq-agent-amd64,
k8s.gcr.io/kube-apiserver,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/kube-apiserver,
gke.gcr.io/kube-proxy,
gke.gcr.io/netd-amd64,
gke.gcr.io/watcher-daemonset,
k8s.gcr.io/addon-resizer,
k8s.gcr.io/prometheus-to-sd,
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
k8s.gcr.io/k8s-dns-kube-dns-amd64,
k8s.gcr.io/k8s-dns-sidecar-amd64,
k8s.gcr.io/metrics-server-amd64,
registry.k8s.io/addon-resizer,
registry.k8s.io/prometheus-to-sd,
registry.k8s.io/k8s-dns-dnsmasq-nanny-amd64,
registry.k8s.io/k8s-dns-kube-dns-amd64,
registry.k8s.io/k8s-dns-sidecar-amd64,
registry.k8s.io/metrics-server-amd64,
kope/kube-apiserver-healthcheck,
k8s_image_list
]

- macro: allowed_kube_namespace_pods
condition: (ka.req.pod.containers.image.repository in (user_allowed_kube_namespace_image_list) or
ka.req.pod.containers.image.repository in (allowed_kube_namespace_image_list))

# Detect any new pod created in the kube-system namespace
- rule: Pod Created in Kube Namespace
desc: Detect any attempt to create a pod in the kube-system or kube-public namespaces
condition: kevt and pod and kcreate and ka.target.namespace in (kube-system, kube-public) and not allowed_kube_namespace_pods
output: Pod created in kube namespace (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]

- list: user_known_sa_list
items: []

- list: known_sa_list
items: [
coredns,
coredns-autoscaler,
cronjob-controller,
daemon-set-controller,
deployment-controller,
disruption-controller,
endpoint-controller,
endpointslice-controller,
endpointslicemirroring-controller,
generic-garbage-collector,
horizontal-pod-autoscaler,
job-controller,
namespace-controller,
node-controller,
persistent-volume-binder,
pod-garbage-collector,
pv-protection-controller,
pvc-protection-controller,
replicaset-controller,
resourcequota-controller,
root-ca-cert-publisher,
service-account-controller,
statefulset-controller
]

- macro: trusted_sa
condition: (ka.target.name in (known_sa_list, user_known_sa_list))

# Detect creating a service account in the kube-system/kube-public namespace
- rule: Service Account Created in Kube Namespace
desc: Detect any attempt to create a serviceaccount in the kube-system or kube-public namespaces
condition: kevt and serviceaccount and kcreate and ka.target.namespace in (kube-system, kube-public) and response_successful and not trusted_sa
output: Service account created in kube namespace (user=%ka.user.name serviceaccount=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace)
priority: WARNING
source: k8s_audit
tags: [k8s]

# Detect any modify/delete to any ClusterRole starting with
# "system:". "system:coredns" is excluded as changes are expected in
# normal operation.
- rule: System ClusterRole Modified/Deleted
desc: Detect any attempt to modify/delete a ClusterRole/Role starting with system
condition: kevt and (role or clusterrole) and (kmodify or kdelete) and (ka.target.name startswith "system:") and
not ka.target.name in (system:coredns, system:managed-certificate-controller)
output: System ClusterRole/Role modified or deleted (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource ns=%ka.target.namespace action=%ka.verb)
priority: WARNING
source: k8s_audit
tags: [k8s]

# Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
# (expand this to any built-in cluster role that does "sensitive" things)
- rule: Attach to cluster-admin Role
desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
output: Cluster Role Binding to cluster-admin role (user=%ka.user.name subject=%ka.req.binding.subjects)
priority: WARNING
source: k8s_audit
tags: [k8s]

- rule: ClusterRole With Wildcard Created
desc: Detect any attempt to create a Role/ClusterRole with wildcard resources or verbs
condition: kevt and (role or clusterrole) and kcreate and (ka.req.role.rules.resources intersects ("*") or ka.req.role.rules.verbs intersects ("*"))
output: Created Role/ClusterRole with wildcard (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource rules=%ka.req.role.rules)
priority: WARNING
source: k8s_audit
tags: [k8s]
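# # How to test:
# # Apply a ClusterRole with a wildcard verb (names are placeholders), e.g.:
# apiVersion: rbac.authorization.k8s.io/v1
# kind: ClusterRole
# metadata:
#   name: demo-wildcard
# rules:
# - apiGroups: [""]
#   resources: ["pods"]
#   verbs: ["*"]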

- macro: writable_verbs
condition: >
(ka.req.role.rules.verbs intersects (create, update, patch, delete, deletecollection))

- rule: ClusterRole With Write Privileges Created
desc: Detect any attempt to create a Role/ClusterRole that can perform write-related actions
condition: kevt and (role or clusterrole) and kcreate and writable_verbs
output: Created Role/ClusterRole with write privileges (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource rules=%ka.req.role.rules)
priority: NOTICE
source: k8s_audit
tags: [k8s]

- rule: ClusterRole With Pod Exec Created
desc: Detect any attempt to create a Role/ClusterRole that can exec to pods
condition: kevt and (role or clusterrole) and kcreate and ka.req.role.rules.resources intersects ("pods/exec")
output: Created Role/ClusterRole with pod exec privileges (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource rules=%ka.req.role.rules)
priority: WARNING
source: k8s_audit
tags: [k8s]

# The rules below this point are less discriminatory and generally
# represent a stream of activity for a cluster. If you wish to disable
# these events, modify the following macro.
- macro: consider_activity_events
condition: (k8s_audit_always_true)

- macro: kactivity
condition: (kevt and consider_activity_events)
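# To disable the activity rules below, redefine the macro in a local/user
# rules file so it never matches:
# - macro: consider_activity_events
#   condition: (k8s_audit_never_true)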

- rule: K8s Deployment Created
desc: Detect any attempt to create a deployment
condition: (kactivity and kcreate and deployment and response_successful)
output: K8s Deployment Created (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Deployment Deleted
desc: Detect any attempt to delete a deployment
condition: (kactivity and kdelete and deployment and response_successful)
output: K8s Deployment Deleted (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Service Created
desc: Detect any attempt to create a service
condition: (kactivity and kcreate and service and response_successful)
output: K8s Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Service Deleted
desc: Detect any attempt to delete a service
condition: (kactivity and kdelete and service and response_successful)
output: K8s Service Deleted (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s ConfigMap Created
desc: Detect any attempt to create a configmap
condition: (kactivity and kcreate and configmap and response_successful)
output: K8s ConfigMap Created (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s ConfigMap Deleted
desc: Detect any attempt to delete a configmap
condition: (kactivity and kdelete and configmap and response_successful)
output: K8s ConfigMap Deleted (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Namespace Created
desc: Detect any attempt to create a namespace
condition: (kactivity and kcreate and namespace and response_successful)
output: K8s Namespace Created (user=%ka.user.name namespace=%ka.target.name resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Namespace Deleted
desc: Detect any attempt to delete a namespace
condition: (kactivity and non_system_user and kdelete and namespace and response_successful)
output: K8s Namespace Deleted (user=%ka.user.name namespace=%ka.target.name resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Serviceaccount Created
desc: Detect any attempt to create a service account
condition: (kactivity and kcreate and serviceaccount and response_successful)
output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Serviceaccount Deleted
desc: Detect any attempt to delete a service account
condition: (kactivity and kdelete and serviceaccount and response_successful)
output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Role/Clusterrole Created
desc: Detect any attempt to create a cluster role/role
condition: (kactivity and kcreate and (clusterrole or role) and response_successful)
output: K8s Cluster Role Created (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource rules=%ka.req.role.rules resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Role/Clusterrole Deleted
desc: Detect any attempt to delete a cluster role/role
condition: (kactivity and kdelete and (clusterrole or role) and response_successful)
output: K8s Cluster Role Deleted (user=%ka.user.name role=%ka.target.name resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Role/Clusterrolebinding Created
desc: Detect any attempt to create a clusterrolebinding
condition: (kactivity and kcreate and clusterrolebinding and response_successful)
output: K8s Cluster Role Binding Created (user=%ka.user.name binding=%ka.target.name resource=%ka.target.resource subjects=%ka.req.binding.subjects role=%ka.req.binding.role resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Role/Clusterrolebinding Deleted
desc: Detect any attempt to delete a clusterrolebinding
condition: (kactivity and kdelete and clusterrolebinding and response_successful)
output: K8s Cluster Role Binding Deleted (user=%ka.user.name binding=%ka.target.name resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Secret Created
desc: Detect any attempt to create a secret. Service account tokens are excluded.
condition: (kactivity and kcreate and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
output: K8s Secret Created (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Secret Deleted
desc: Detect any attempt to delete a secret. Service account tokens are excluded.
condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: INFO
source: k8s_audit
tags: [k8s]

- rule: K8s Secret Get Successfully
desc: >
Detect any attempt to get a secret. Service account tokens are excluded.
condition: >
secret and kget
and kactivity
and response_successful
output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: ERROR
source: k8s_audit
tags: [k8s]

- rule: K8s Secret Get Unsuccessfully Tried
desc: >
Detect an unsuccessful attempt to get the secret. Service account tokens are excluded.
condition: >
secret and kget
and kactivity
and not response_successful
output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resource=%ka.target.resource resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
priority: WARNING
source: k8s_audit
tags: [k8s]

# This rule generally matches all events, and as a result is disabled
# by default. If you wish to enable these events, modify the
# following macro.
# condition: (jevt.rawtime exists)
- macro: consider_all_events
condition: (k8s_audit_never_true)

- macro: kall
condition: (kevt and consider_all_events)

- rule: All K8s Audit Events
desc: Match all K8s Audit Events
condition: kall
output: K8s Audit Event received (user=%ka.user.name verb=%ka.verb uri=%ka.uri obj=%jevt.obj)
priority: DEBUG
source: k8s_audit
tags: [k8s]
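# To enable the rule above, redefine the macro in a local/user rules file:
# - macro: consider_all_events
#   condition: (k8s_audit_always_true)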


# This macro disables the following rule; change it to k8s_audit_never_true to enable the rule
- macro: allowed_full_admin_users
condition: (k8s_audit_always_true)

# This list includes some of the default user names for an administrator in several K8s installations
- list: full_admin_k8s_users
items: ["admin", "kubernetes-admin", "kubernetes-admin@kubernetes", "[email protected]", "minikube-user"]

# This rule detects an operation triggered by a user name that is
# included in the list of default administrators created upon cluster
# creation. This may signify an overly broad permission setting. As we
# can't check the role of the user on a general ka.* event, this may or
# may not be an administrator. Customize the full_admin_k8s_users list
# to your needs, and activate at your discretion.

# # How to test:
# # Execute any kubectl command connected using default cluster user, as:
# kubectl create namespace rule-test

- rule: Full K8s Administrative Access
desc: Detect any k8s operation by a user name that may be an administrator with full access.
condition: >
kevt
and non_system_user
and ka.user.name in (full_admin_k8s_users)
and not allowed_full_admin_users
output: K8s Operation performed by full admin user (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
priority: WARNING
source: k8s_audit
tags: [k8s]
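# To enable this rule, redefine the macro in a local/user rules file as
# described above:
# - macro: allowed_full_admin_users
#   condition: (k8s_audit_never_true)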

- macro: ingress
condition: ka.target.resource=ingresses

- macro: ingress_tls
condition: (jevt.value[/requestObject/spec/tls] exists)

# # How to test:
# # Create an ingress.yaml file with content:
# apiVersion: networking.k8s.io/v1beta1
# kind: Ingress
# metadata:
# name: test-ingress
# annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /
# spec:
# rules:
# - http:
# paths:
# - path: /testpath
# backend:
# serviceName: test
# servicePort: 80
# # Execute: kubectl apply -f ingress.yaml

- rule: Ingress Object without TLS Certificate Created
desc: Detect any attempt to create an ingress without TLS certification.
condition: >
(kactivity and kcreate and ingress and response_successful and not ingress_tls)
output: >
K8s Ingress Without TLS Cert Created (user=%ka.user.name ingress=%ka.target.name
namespace=%ka.target.namespace resource=%ka.target.resource)
source: k8s_audit
priority: WARNING
tags: [k8s, network]

- macro: node
condition: ka.target.resource=nodes

- macro: allow_all_k8s_nodes
condition: (k8s_audit_always_true)

- list: allowed_k8s_nodes
items: []

# # How to test:
# # Create a Falco monitored cluster with Kops
# # Increase the number of minimum nodes with:
# kops edit ig nodes
# kops apply --yes

- rule: Untrusted Node Successfully Joined the Cluster
desc: >
Detect a node that successfully joined the cluster and is not in the list of allowed nodes.
condition: >
kevt and node
and kcreate
and response_successful
and not allow_all_k8s_nodes
and not ka.target.name in (allowed_k8s_nodes)
output: Node not in allowed list successfully joined the cluster (user=%ka.user.name node=%ka.target.name resource=%ka.target.resource)
priority: ERROR
source: k8s_audit
tags: [k8s]

- rule: Untrusted Node Unsuccessfully Tried to Join the Cluster
desc: >
Detect an unsuccessful attempt to join the cluster for a node not in the list of allowed nodes.
condition: >
kevt and node
and kcreate
and not response_successful
and not allow_all_k8s_nodes
and not ka.target.name in (allowed_k8s_nodes)
output: Node not in allowed list tried unsuccessfully to join the cluster (user=%ka.user.name node=%ka.target.name reason=%ka.response.reason resource=%ka.target.resource)
priority: WARNING
source: k8s_audit
tags: [k8s]
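# To activate these node rules, disable the allow-all macro and list the
# trusted nodes in a local/user rules file (node name is a placeholder):
# - macro: allow_all_k8s_nodes
#   condition: (k8s_audit_never_true)
# - list: allowed_k8s_nodes
#   items: [ip-10-0-1-23.ec2.internal]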