Pipes and Redirection: Chaining Commands to Build Processing Pipelines
Standard Output and Standard Error: stdout and stderr
Every Linux command has three standard data streams: stdin (fd 0) for input, stdout (fd 1) for normal output, and stderr (fd 2) for error messages. By default, both stdout and stderr display on your screen, but they can be redirected independently.
ls /opt/scada/ /nonexistent/
The successful listing goes to stdout; the error about /nonexistent/ goes to stderr. Understanding this separation is key to building reliable automation on industrial servers. You might want to save output to a log while still seeing errors on screen, or discard routine messages while capturing only failures.
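The separation can be demonstrated anywhere with a throwaway directory; this is a minimal sketch, and the directory and file names here are invented for the demo rather than taken from a real SCADA server:

```shell
# Demonstrate stdout/stderr separation using a temporary directory.
workdir=$(mktemp -d)
mkdir "$workdir/present"
touch "$workdir/present/sensor.csv"

# stdout (the successful listing) goes to one file; stderr (the error
# about the missing directory) goes to another.
ls "$workdir/present" "$workdir/missing" > "$workdir/listing.log" 2> "$workdir/errors.log"

# listing.log now contains sensor.csv; errors.log contains the
# "No such file or directory" message about the missing path.
cat "$workdir/listing.log"
cat "$workdir/errors.log"
```

Swapping the two redirection targets (or omitting one of them) shows how each stream can be routed independently.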
Redirection: >, >>, and 2>
ls /opt/scada/ > filelist.txt # Overwrite file with stdout
ls /opt/scada/ >> filelist.txt # Append to file
find / -name "modbus.conf" 2> /dev/null # Discard errors
find / -name "modbus.conf" 2> errors.log # Save errors separately
find / -name "modbus.conf" > results.txt 2>&1 # Both streams to same file
find / -name "modbus.conf" &> all_output.txt # Shorthand for same
On production servers, always use >> for log files so previous data is preserved. The /dev/null special device discards anything written to it -- it is the Linux equivalent of a black hole.
# Separate output and errors for automated scripts
./collect_sensors.sh >> /var/log/sensors/daily.csv 2>> /var/log/sensors/errors.log
Pipes: Connecting One Command's Output to Another's Input
The pipe (|) takes stdout from one command and feeds it as stdin to the next. This is the single most powerful concept in the Linux command line.
ls /opt/scada/data/ | grep "\.csv$" | wc -l # Count CSV files
du -sh /var/log/* | sort -rh | head -5 # Top 5 largest logs
grep "ERROR" /var/log/scada.log | awk '{print $5}' | sort | uniq -c | sort -rn
Each command does one thing well; pipes connect them into powerful chains. This is the Unix philosophy: small tools combined to solve big problems.
# Multi-stage pipeline with line continuation
grep "ALARM" /var/log/sensors/temp.csv | \
awk -F',' '{print $2}' | \
sort | uniq -c | sort -rn | head -3
tee: Writing to File and Screen Simultaneously
./run_diagnostics.sh | tee diagnostics.log # Display and save
./run_diagnostics.sh | tee -a diagnostics.log # Append mode
Use tee inside pipelines to save intermediate results:
grep "CRITICAL" /var/log/scada.log | \
tee /tmp/critical_events.log | \
awk '{print $1, $2}' | sort | uniq -c
This saves all CRITICAL events to a file while continuing the pipeline for further processing.
xargs: Converting Output Into Arguments
Some commands do not read from stdin -- they expect filenames or values as arguments. xargs bridges this gap by converting piped input into command-line arguments.
find /tmp -name "*.tmp" | xargs rm # Delete found files (breaks on names with spaces)
find /var/log -name "*.log" -size +10M | xargs ls -lh # Size of large logs
find /opt/scada/data/ -name "*.csv" -print0 | xargs -0 wc -l # Handle spaces in names
find /opt/scada/config/ -name "*.yaml" | xargs -I{} cp {} /backup/ # Copy each file
find /data -name "*.gz" | xargs -n 8 -P 4 gunzip # Parallel execution (-n splits input into batches)
The -P flag runs up to that many command invocations concurrently; pair it with -n so xargs actually splits the input into multiple invocations. This can dramatically speed up batch operations on multi-core servers.
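A self-contained sketch of a parallel batch job follows; the file names and counts are invented for the demo. Note how -print0/-0 keeps names with spaces safe, and -n caps how many files each worker process receives:

```shell
# Create a few gzipped files (including names with spaces), then
# decompress them with up to 4 gunzip processes running in parallel.
datadir=$(mktemp -d)
for i in 1 2 3 4 5; do
    echo "reading $i" > "$datadir/sensor $i.log"
done
gzip "$datadir"/*.log

# -n 2 hands each gunzip at most 2 files, so the 5 files become
# 3 invocations, which -P 4 schedules concurrently.
find "$datadir" -name "*.gz" -print0 | xargs -0 -n 2 -P 4 gunzip
```

Without -n (or -L), xargs may pack all input into a single invocation, leaving -P with nothing to parallelize.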
Practical Example: A Pipeline for Analyzing a Daily Production Log
Given /var/log/production/2026-04-15.log:
2026-04-15T08:00:01 LINE-A UNIT-01 PRODUCE 150 units OK
2026-04-15T08:00:02 LINE-B UNIT-01 PRODUCE 0 units FAULT
LOG="/var/log/production/2026-04-15.log"
# Total production
awk '{sum += $5} END {printf "Total: %d units\n", sum}' "$LOG"
# Production per line
awk '{a[$2]+=$5} END {for(k in a) print k, a[k]}' "$LOG" | sort
# Count and save faults
grep "FAULT" "$LOG" | tee /tmp/faults.log | wc -l | xargs -I{} echo "Faults today: {}"
# Fault rate per unit
grep "FAULT" "$LOG" | awk '{print $2, $3}' | sort | uniq -c | sort -rn
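The snippets above assume the production log already exists on disk; the total-production query can be tried anywhere with a throwaway sample log whose contents are invented to match the format shown:

```shell
# Build a three-line sample log in the lesson's format, then sum
# the production counts in column 5.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2026-04-15T08:00:01 LINE-A UNIT-01 PRODUCE 150 units OK
2026-04-15T08:00:02 LINE-B UNIT-01 PRODUCE 0 units FAULT
2026-04-15T08:00:03 LINE-A UNIT-02 PRODUCE 200 units OK
EOF

awk '{sum += $5} END {printf "Total: %d units\n", sum}' "$LOG"
# Total: 350 units
```

The same sample file works with the per-line, fault-count, and fault-rate pipelines above.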
Pipelines like these process millions of log lines in seconds, producing actionable reports that help factory managers identify problem areas immediately.
Summary
In this lesson you learned how to connect commands and control data flow:
- Every command has stdin, stdout, and stderr that can be redirected independently.
- > overwrites, >> appends; 2> redirects errors; &> captures both streams.
- Pipes (|) chain commands, passing stdout to stdin.
- tee writes to file and screen simultaneously, useful for debugging pipelines.
- xargs converts piped input into arguments, with support for parallel execution.
- Complex production log analysis can be done in a single pipeline combining multiple tools.
In the next lesson, you will learn about Linux processes: how to monitor, manage, and stop runaway programs on industrial servers.