8 Linux Tools IT Operations Engineers Should Master

Some are tried-and-true and others are newer, but all eight of these Linux tools should be in every ITOps engineer's tool belt.

Christopher Tozzi, Technology analyst

December 7, 2022


Which Linux tools are the most important for IT operations work? That depends on exactly which type of ITOps work you're talking about, of course. ITOps teams that manage cloud-based microservices workloads need to master a somewhat different set of Linux tools from those who work with on-premises monolithic applications, for instance.

Still, in general, there is a core set of Linux tools that every IT operations engineer should know. Here's a look at the top eight such tools. Some are tried-and-true utilities that have been around for decades. Others are newer but are becoming increasingly important to IT operations work.

1. Tcpdump

Want to know what's happening on the network? Tcpdump, which is installed on most Linux distributions by default, is a handy way to find out. The tool collects packets as they flow across a network interface. IT operations engineers can then inspect the packets to identify information like source IP address and the protocol used. For example:

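Here's a minimal sketch of some common invocations (the interface name eth0 is a placeholder; list your own interfaces with ip link):

# capture traffic on a specific interface; -n skips DNS resolution of addresses
sudo tcpdump -n -i eth0

# capture only HTTPS traffic and stop after 20 packets
sudo tcpdump -n -i eth0 -c 20 port 443

Each line of output shows a timestamp, the protocol, and the source and destination addresses and ports, which is usually enough to spot-check what's traversing an interface.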

Tcpdump isn't the best solution for advanced network traffic analysis (for that, more complex tools, like Wireshark, tend to work better). But for ITOps teams that need a fast and easy way of seeing what's happening on the network, tcpdump is the go-to Linux tool.

2. Nmap


Tcpdump shows which traffic is flowing on your network, but it doesn't display information about the network itself.

To gain the latter insight, you'll want to use nmap, a Linux tool that displays information about how your local network is organized, as well as data like which ports are open and even which operating system different servers are running.
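For example (the addresses below are placeholders for your own network range and hosts):

# ping-scan the local subnet to discover live hosts
nmap -sn 192.168.1.0/24

# scan a host for open ports and probe service versions; -O adds OS detection (requires root)
sudo nmap -sV -O 192.168.1.10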

3. Ps

Ps is one of the simplest, but also most important, Linux tools. It lists running processes, and it can optionally provide some details about them. It's useful when you need to figure out if a process is still running, or troubleshoot why it has stopped responding. In modern environments, ps is particularly valuable for tracking down the reasons why containers have failed to start or have stopped running.
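A few invocations cover most day-to-day uses (nginx is just an example process name):

# list every process on the system with owner, CPU, and memory details
ps aux

# check whether a particular process is still running
ps aux | grep nginx

# show processes as a parent/child tree, handy for tracing what spawned a container
ps -ef --forest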

4. Top

Ps is great if you want to check the status of a particular process at a given point in time. But what if you want a dynamic, continuously updated look into the state of your system's processes? In that case, top is your friend.


Top displays a list of processes, along with information like each process's owner and how many resources it is consuming. The list is updated in real time.

A limitation of top is that, by default, it sorts processes by how many resources they are consuming (hence the name "top": it lists the top processes in terms of resource consumption). That makes it less useful for checking in on a process that isn't resource-intensive, although you can change the sort field, as the example below shows. But if you need to figure out which processes are eating up all your CPU or memory, top is a fast way to do it.
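For instance, with the procps-ng version of top shipped by most distributions, you can change the sort field or watch a single process (the PID below is a placeholder):

# sort by memory usage instead of the default CPU ordering
top -o %MEM

# monitor only the process with PID 12345
top -p 12345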


5. Df

Top and ps can display the memory and CPU utilization for each process, but they don't provide insight into storage consumption. For that, you'll want to use a tool like df, which shows how much storage is in use by various file systems.


Pro tip: Pass the -h argument to df to display storage in human-readable units like megabytes and gigabytes, which are easier to parse than the default 1K blocks.
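For example:

# show usage for all mounted file systems in human-readable units
df -h

# show usage only for the file system containing a given path
df -h /var/log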

6. Docker

Today, the Docker CLI tool (which you can invoke by typing docker in the terminal of most Linux distributions) is no longer central to running containers in production. Most ITOps teams instead use an orchestrator like Kubernetes, which deploys containers without requiring each one to be started or managed on the command line using Docker.

That said, the docker command still comes in handy if you want to test a container or launch a containerized application on a one-off basis. It's therefore still worth knowing how to use the Docker CLI to start, stop, and manage containers.
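A quick one-off test might look like this (the nginx image is just an example):

# run an nginx container in the background, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

# list running containers
docker ps

# stop and remove the test container
docker stop web
docker rm web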

7. Bcc

Bcc, short for BPF Compiler Collection, is a toolkit for building and running eBPF programs; eBPF is the amazing technology that makes it possible to run sandboxed programs directly inside the Linux kernel.

Bcc isn't installed by default on most Linux distributions, but it's available through package managers, or you can install it from GitHub.
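Package names vary by distribution; for instance (these are the package names at the time of writing, so double-check your distribution's repositories):

# Ubuntu/Debian
sudo apt install bpfcc-tools

# Fedora
sudo dnf install bcc-tools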

Once you install bcc, you can either run the ready-made tracing tools that ship with it or write your own eBPF programs for it to load; bcc on its own is a way to interact with eBPF, not an end-user application. It's also worth noting that bcc is not the only way to take advantage of eBPF; many observability and security tools now integrate with eBPF on the back end to provide eBPF-based functionality that admins don't have to set up themselves.

Still, if you want a simple way of leveraging eBPF directly from the command line, bcc is the Linux tool to know.
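For example, the execsnoop tool bundled with bcc traces every new process launched anywhere on the system (on Ubuntu the bundled tools carry a -bpfcc suffix; names and paths vary by distribution):

# trace new process execution kernel-wide (requires root)
sudo execsnoop-bpfcc

# trace file opens across the system, another bundled bcc tool
sudo opensnoop-bpfcc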

8. History

Ever find yourself wanting to rerun a command from a few days ago, but you can't remember exactly what it was? The Linux tool history will help you figure it out. History displays a list of the commands you've previously run in the terminal.

A limitation is that history lists command histories on a user-by-user basis, so if you ran a command as root but you're currently logged in as a different user, you'll need to switch to root to find the right command history.
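A few examples of how history is typically used in Bash:

# search past commands for anything mentioning ssh
history | grep ssh

# re-run command number 42 from the history list
!42

# re-run the most recent command that started with tcpdump
!tcpdump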

Conclusion

Again, the most important Linux tools to know vary depending on your use cases. But by and large, almost every IT operations engineer today should have an understanding of the core Linux utilities described above, which play a central role in administering Linux systems and the applications running on them.

About the Author(s)

Christopher Tozzi

Technology analyst, Fixate.IO

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.

