4 Ways to Help PowerShell Find External Tools

Understanding PowerShell's Command Search Process

Alex Angelopoulos

December 22, 2008



PowerShell lets you create compact, highly portable scripts, but if you don't have much experience using command-line tools, getting PowerShell to find those scripts when you use them can be a problem. Unlike Cmd.exe, PowerShell doesn't include the current shell location in the command search path, so you can't simply save files to an arbitrary folder and set that folder as the working location (a crucial point I'll return to). Let's walk through exactly how command search works and how the command-search process differs from working with graphical applications. Then I'll show you the techniques available for making PowerShell find external tools correctly.

How PowerShell Command Search Works

Let me start out by defining a special term I use for scripts and applications that reside in files: I call them command files. Command search is all about finding command files by their name.

The Windows GUI doesn't have a precise analog to the concept of a command-search path. Normally, you run an application by clicking an icon. That icon either points directly to an application in a known, specific location or to a document that has a document handler application in a known, specific location. In the case of documents, Windows has to look up the application location in the registry, but for all practical purposes, this is deterministic; it isn't a search.

When you give PowerShell a name it interprets as a command, however, something a little different happens. PowerShell can precisely identify the command if it's a function, filter, or cmdlet name already loaded into PowerShell; in that case, there is no search process. If the command name isn't one of these internal command types, PowerShell then checks whether what you entered looks like an explicit command path, such as C:\Windows\notepad.exe or .\someapp.exe. If the entered command isn't an explicit command path, PowerShell searches for the command. (If you're wondering why I don't mention aliases, it's because PowerShell resolves aliases to the command name used in the alias definition; they're not true command names.) Let's say we enter asdf.x, which won't be a command or document name on your system; this forces PowerShell to go through the entire command-search process.

PowerShell's first step in looking for the command is to check the environment path variable. This variable contains a semicolon-separated list of directories where PowerShell will look for the command. You can see this set of directories if you type $env:path at a PowerShell prompt. If your path variable contains the string

C:\Windows\system32;C:\Windows;C:\Windows\system32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0

then PowerShell will check the following directories in the order shown:

C:\Windows\system32
C:\Windows
C:\Windows\system32\wbem
C:\Windows\System32\WindowsPowerShell\v1.0
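
To see your own path broken out the same way, one directory per line, you can split the string on its semicolons:

$env:path.Split(";")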

PowerShell begins the search in C:\Windows\System32 by checking for a file with the exact name asdf.x. If there were a file C:\Windows\System32\asdf.x, then PowerShell would pass the request to open the file over to Windows Explorer. When Windows Explorer can't find a document handler for .x files, Explorer automatically prompts you to find the application to use to open .x files. PowerShell can't control the prompting for a document handler.

Assuming there was no asdf.x file, PowerShell next assumes that you might have entered the name of an executable file, but without the executable file extension. Here we're using the term executable in a broad sense: PowerShell treats files as executables if they end in .PS1 (for PowerShell scripts) or any of the extensions found in the pathext environment variable. If you enter $env:pathext at a PowerShell prompt, you'll see a list, which should look something like this:

.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.PSC1

PowerShell next checks for the file C:\Windows\System32\asdf.x.PS1. If that's not found, PowerShell tries each of the types in pathext in turn, asking the directory C:\Windows\System32 if it contains asdf.x.COM, asdf.x.EXE, asdf.x.BAT, asdf.x.CMD, asdf.x.VBS, asdf.x.VBE, asdf.x.JS, asdf.x.JSE, asdf.x.WSF, asdf.x.WSH, asdf.x.MSC, and finally asdf.x.PSC1.

If PowerShell hasn't found a file at this point, it goes on to the next directory, C:\Windows, and starts the matching process again. When PowerShell finishes checking the last directory for the last possible match and still hasn't succeeded, it makes one more search attempt. Here's how it works.

If the command name you used didn't begin with "get-", PowerShell considers the possibility that you might be using a short name for a "getter" script. PowerShell now repeats the entire process using the base name get-asdf.x. If this still doesn't produce a matching name, PowerShell gives you the error message "The term 'asdf.x' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again."

We can summarize the logic behind the process. Roughly, in crude pseudo-code, it's this: "If a term X is not a PowerShell internal command, and is not a precise path to an external file, then, for each directory listed in $env:path do the following and return the first match: Check the directory for a file named X; then check for a file X.PS1; then check for X with each extension listed in $env:pathext appended in turn."

You can actually generate a demonstration easily with the Show-CommandResolution script in Listing 1.

Listing 1: Show-CommandResolution.ps1
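
The complete script appears in Listing 1. Stripped to its core, the enumeration logic looks roughly like the following sketch (a simplified stand-in, not the full listing; save it under whatever name you like):

# Sketch: list every file name PowerShell would try for a given command name
param([string] $Name = "asdf.x")

# Directories PowerShell searches, in order
$dirs = $env:path.Split(";")
# .PS1 is tried first, then every extension in pathext
$extensions = @(".PS1") + $env:pathext.Split(";")

foreach ($dir in $dirs)
{
    if ($dir.Length -eq 0) { continue }
    # The exact name as typed...
    Join-Path $dir $Name
    # ...then the name with each executable extension appended in turn
    foreach ($ext in $extensions)
    {
        Join-Path $dir ($Name + $ext)
    }
}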
If you run Show-CommandResolution with a name as an argument, the script automatically generates all of the possible attempted name matches for the current system configuration. In other words, you can get different results on different systems, but the results are always correct. Try typing a line such as

Show-CommandResolution asdf.x

to see for yourself. You can also use Get-Command to show all of the real files that match a particular name; Get-Command also shows PowerShell internal commands. For example, typing

Get-Command sc

will show you that sc matches an alias mapped to Set-Content as well as the Service Controller application, sc.exe.

This raises an interesting point: Conflicts can and will occur between names of various applications. So it's useful to be aware of how command resolution works and to check names with Get-Command if you think there's a problem.
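
If a name turns out to be ambiguous, you can also narrow Get-Command to a single command type to see exactly which file would run. For example:

# Show only external applications named sc, ignoring the Set-Content alias
Get-Command -Name sc -CommandType Application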

Why Command Search Is Do-it-Yourself

Now that we've looked at how command search works, it will be somewhat easier to understand why applications you install don't automatically provide working command search.

It all starts with the way that application installation has developed since the advent of the Windows GUI. Application installation procedures have been heavily oriented towards supporting the mass of consumer applications and remediating problems with those installations. These applications are generally graphical standalone applications that are immersive: You don't directly chain together the applications yourself. To reduce the likelihood of application installations causing problems for other applications, applications must be installed in their own distinct directories. There is minimal support for analyzing and working with the path during application installations—since path search can cause significant problems for mixed-version DLLs, this is by design as well.

Unfortunately, this lack of support for path search leaves command-line tool users out in the cold. You could write an installer routine for each application that adds the application install directory to your search path, but that's very unattractive. Command-line tools are generally a la carte items, and not only would separate installations produce a lot of extra effort and packaging for something that's supposed to be a drop-in component, but they'd produce enormous search paths. I have roughly 600 command-line tools and scripts that I use occasionally; adding a path such as C:\Program Files\Developer Company\ToolName to my search path for each one would cause the search path to balloon and command search to slow down to a crawl.

An ideal solution would be to use a common tools folder that supports single-tool, drag-and-drop or built-in self-installer scripting, but there's no such thing on the Windows platform. Although you can create your own tools folder, which is the best general solution to helping PowerShell find tools, it's not as satisfactory a solution as it could be.

For comparison, consider the UNIX family of OSs. They not only come with a wide range of command-line tools preinstalled, they also have a standard directory for adding small executables: the /usr/bin folder. Tools generally come with an installer script that handles placing them there, and even programs distributed as source files include installation as part of the make script. You don't need to be aware of any of these details to add new applications—you just need to run the install script. If you get a precompiled tool with no installer, you just need to know about /usr/bin; all you do is place the tool there and tell UNIX it's an executable.

In contrast, on Windows you're on your own. You need to understand that you must manually place executables in your search path. You decide where to put a tools folder and then create it. You set permissions appropriately on the folder. Then you repeat this process manually for each and every system where you need to use the tool or tools you're going to install. Finally, you put the tools in that location. It's easy to describe, but particularly if you're just starting to use command-line tools, it involves both decision-making and time.

Putting Scripts and Command-line Tools into Your Home Directory

One theoretically viable solution is to save scripts or command-line applications to your user home directory. You can see this location from a PowerShell prompt by entering

$home

In my case, the home directory is C:\Users\aka, so I would save the script Show-CommandResolution.ps1 to C:\Users\aka\Show-CommandResolution.ps1. Now, this does not precisely put the tool into my search path. However, if you prefix a command name with .\, PowerShell looks for that command in the current PowerShell location, which is initially your user home directory.
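
For example, assuming the script has been saved there and you haven't changed the location, you can run it like this:

.\Show-CommandResolution.ps1 asdf.x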

Saving scripts to your user home directory doesn't require any special permissions, and it actually eliminates the need for command resolution for this command. However, it does have some drawbacks: It's awkward and sloppy; it stops working as soon as you use PowerShell's Set-Location cmdlet to move elsewhere, because the .\ prefix always resolves relative to the current PowerShell location; and tools kept in this location are available only to the current user.

Using PowerShell Aliases

Another technique is to set up aliases for tools. I have the installutil command-line tool (the .NET Framework's Installer tool, used to run the installer components in .NET assemblies) on my PC. It lives in the .NET Framework directory, which isn't anywhere in the default search path. At a PowerShell prompt or within a PowerShell profile script, I can simply set up an alias that points to the application:

Set-Alias -Name installutil -Value $env:windir\Microsoft.NET\Framework\v2.0.50727\installutil.exe

PowerShell will then automatically find the program any time I type the name installutil. If you have command-line applications already installed to various locations, this is a viable way to make them accessible. However, this method requires that you explicitly set up an alias for each tool. You will also need to set up the alias each time you use PowerShell unless you've added the alias to your PowerShell profile script and are using the same personal profile. For more details about configuring PowerShell profiles, see "What You Need to Know to Start Using PowerShell's Personal Profile Scripts," InstantDoc ID 97669 (http://windowsitpro.com/article/articleid/97669/what-you-need-to-know-to-start-using-powershells-personal-profile-scripts.html).
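
If you do put aliases like this in your profile, a slightly defensive variant (a sketch using the same installutil path as above) keeps the profile from generating errors on machines where the tool isn't present:

# Create the alias only if the executable actually exists on this machine
$tool = "$env:windir\Microsoft.NET\Framework\v2.0.50727\installutil.exe"
if (Test-Path $tool) { Set-Alias -Name installutil -Value $tool }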

Adding a Custom Tools Directory

My preferred technique is to create a custom folder for tools, then add the folder to my search path. I keep all of my add-on applications and scripts in that folder. I also get a crude form of version and name-collision control this way: If I add a tool that uses the same name as an existing tool, I know as soon as I attempt to copy it, because Windows warns me that the file already exists in the target directory. You can also synchronize the folder to removable media, which is an important feature because most of these tools are portable and are useful for daily work in various locations.

Although the net effect of adding a directory to the search path is the same no matter how you do it, you do have several options for how to add a specific directory. These options differ depending on where you want the directory to appear and how visible you want it to be.

With the first option, you put the added directory in the search path for all users and processes on the computer; this also makes it useful for accessing tools used from Cmd.exe. To do so, you simply add a semicolon and the folder path to the system's path variable. You can reach that variable by opening the System application from the Control Panel. On the Advanced tab, click the Environment Variables button to display user and system-wide variables, which Figure 1 shows.

Figure 1: User and system-wide environment variables in the Environment Variables dialog box
In the bottom section for system-wide variables, select the Path variable, which Figure 2 shows, and click the Edit… button. You can then add the new directory.

Figure 2: Selecting the system Path variable for editing
Don't forget that you must separate the directory paths with semicolons. Also, if you're going to use a network location, use a drive letter mapped by all users, not a Universal Naming Convention (UNC) path.
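
If you'd rather make the same machine-wide change from an elevated PowerShell prompt than through the GUI, the .NET Environment class can do it. This is just a sketch, using the c:\apps\bin folder from the examples that follow; note that sessions that are already open won't see the change:

# Append a folder to the machine-wide Path (requires administrative rights)
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", $machinePath + ";c:\apps\bin", "Machine")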

The second technique is to add the directory with a statement in a PowerShell profile script. This makes the directory automatically accessible in all PowerShell sessions. PowerShell exposes Windows environment variables through the env: drive, so to add the directory C:\apps\bin to your search path, you would simply do this:

$env:path += ";c:appsbin"

where += is a shorthand expression in PowerShell that means "add this to the variable I just named"; the above statement is equivalent to $env:path = $env:path + ";c:\apps\bin".
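
In a profile script, you might also guard the addition so the folder isn't appended twice, for example:

# Append c:\apps\bin only if it isn't already in the path
if ($env:path -notlike "*c:\apps\bin*") { $env:path += ";c:\apps\bin" }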

You can also use this technique from a PowerShell prompt as needed. I keep my portable scripts and command-line tools on a USB drive in a folder named bin. When I'm making a site visit and need the tools, I simply plug in the USB drive and check the drive letter assigned to it. Assuming Windows assigned the letter E to the USB drive, in my PowerShell session I simply enter

$env:path += ";E:bin"

and PowerShell automatically searches E:\bin for unfound commands for the rest of the PowerShell session.
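
If you don't want to check the drive letter by hand, you could look it up by volume label instead. Here's a sketch that assumes the USB drive carries the hypothetical label TOOLS:

# Find the drive by volume label and add its bin folder to the path
$usb = Get-WmiObject Win32_LogicalDisk -Filter "VolumeName='TOOLS'"
if ($usb) { $env:path += ";" + $usb.DeviceID + "\bin" }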

Using the PowerShell Directory

Another technique for making commands easy to find is to save any command files to a directory that's already in your search path, such as the PowerShell directory itself. The path to this directory is contained in the PowerShell variable $pshome, and is a location such as C:\Windows\System32\WindowsPowerShell\v1.0.

If you want to have machine-wide PowerShell profile scripts, you're already committed to inserting the scripts into this folder, even though best practices for application vendors are always to stay away from modifications to the Windows folder or anything within it. Using the PowerShell directory offers some advantages, beyond not needing to add a custom folder to the search path yourself:

  • Tools copied here will be available to anyone on the system.

  • Tools copied here will inherit permissions from the Windows directory.

  • You don't need to add anything to the command search path. And because machine-wide profiles might already require you to add a script or two to this location, it's a known location that administrators can modify; a complete uninstall of PowerShell followed by manually deleting this folder removes any bad modifications.


Unfortunately, there are also a great many disadvantages to using the PowerShell directory. In fact, the disadvantages outweigh the ease of use:

  • Using a system or general application directory is bad practice in general for developers. Although I find it reasonable for an administrator to do this selectively, and even though it's necessary for machine-wide profiles, the technique is off the beaten trail, so it amounts to a special customization that's vulnerable to everything from losing file access on upgrades to losing the files entirely if you need to perform a Windows reinstallation.

  • This technique also makes it virtually impossible to segregate custom additions from core PowerShell files. You'll also encounter problems on 64-bit Windows, where there are actually two different PowerShell home directories, one under System32 and one under SysWOW64.

  • In Windows Vista, User Account Control (UAC) makes modifications to this location difficult.


All of this adds up to making a system folder unattractive as a location for storing your binaries.

Options for Accessing External Commands

Each of the following techniques might be suitable under different circumstances, although I don't use some of them. In general, the technique you use to access external commands will depend on your situation. Let's summarize the scenarios:

1. If you don't mind including a preceding .\ whenever you invoke external scripts, and you never change your PowerShell location, you could simply stick tools into your home directory. Dropping tools into this location might work in a pinch and doesn't require any system modifications, but it's sloppy and error-prone.

2. If you're an administrator for a system and want to have specific tools available to all users on the machine, you can put them into PowerShell's own directory. Note: This is not considered a best practice. You're likely to lose tools during an upgrade or lose access to them if you switch between 32-bit and 64-bit versions of PowerShell.

3. For general use, I prefer a custom tools directory that I've added to my system path. On my main system at home, my binary tools are located in c:\apps\bin and my scripts are in c:\apps\bin\scripts; I've added both to my search path. On a work network, you can use a shared network location that administrators map to a drive letter for all users. This makes it easy to centralize tools administration.

4. For some tools, I also use aliases. For example, applications from Microsoft's Windows Software Development Kit (SDK) that I occasionally use need to be located in the same directory as specific files. To avoid having dozens of directories in my search path for these tools, I just create an alias for each one.

Whichever technique you use, it's worth your while to simplify access to tools you use every day. A few minutes spent configuring your system to find tools without extra effort will pay off on a daily basis.
