Linux Interview Questions and Answers

Created by Gautam Sharma in Articles 20 Mar 2025

Preparing for a Linux interview can be both exciting and challenging, especially with the wide range of concepts and commands that Linux entails.

As one of the most widely used operating systems, especially in server environments and development spaces, Linux proficiency is a highly valued skill in many technical roles, from system administrators to DevOps engineers.

In this guide, we have compiled a list of the most frequently asked Linux interview questions and answers, covering candidates from beginner to experienced. We also have a special section on Linux interview questions for DevOps.

You can also check out our Linux training courses to learn the fundamentals and advanced topics of Linux while preparing for Linux certifications.

Linux Interview Questions for Freshers  

Here are the Linux interview questions with answers for freshers with 0 to 1 year of experience. We have included common Linux commands interview questions in this section.

1. What is the purpose of the `ls` command in Linux? 

The `ls` command is used to list files and directories within a directory. By default, it displays the contents of the current directory. It has several options, such as `-l` for a long list format that shows permissions, owner, size, and modification date, and `-a` to display hidden files (those beginning with a dot). For example, `ls -la` combines both options to list all files, including hidden ones, with detailed information. This command is fundamental for navigating and understanding the directory structure in Linux. 

2. How does the `cd` command work, and what are some common uses? 

The `cd` (change directory) command allows you to move between directories. For example, `cd /path/to/directory` moves you to the specified path. Using `cd ..` takes you up one directory level, while `cd ~` or simply `cd` brings you back to your home directory. It’s a simple yet powerful tool for navigating the file system in Linux, essential for any operation that requires moving between folders. 

3. Explain the `pwd` command and its importance. 

The `pwd` (print working directory) command displays the full path of the current directory. When working with multiple files and directories, it’s essential to keep track of your location in the file system, and `pwd` helps clarify your exact position at any moment. This is particularly useful in scripts or when working with multiple terminals, as it prevents accidental modifications to files in unintended directories. 



4. What does the `cp` command do, and how can you use it effectively? 

The `cp` command is used to copy files or directories. For example, `cp file1.txt /new/location/` copies `file1.txt` to the specified directory. To copy directories, the `-r` (recursive) option is used, as in `cp -r dir1 /new/location/`. Additionally, `cp -i` prompts before overwriting existing files, which is useful to prevent accidental data loss. Knowing how to use `cp` efficiently is key in managing file transfers and backups. 

5. How is the `mv` command used in Linux? 

The `mv` command is used to move or rename files and directories. For example, `mv file1.txt /new/location/` moves `file1.txt` to a different directory, while `mv oldname.txt newname.txt` renames a file within the same directory. The `mv` command is particularly useful for organizing files, renaming them, or moving them to different locations without duplicating the data. 

6. What is an inode? 

In Linux, an inode stores all the information about a file except its name and its data. The inode number is a unique identifier assigned by the file system when the file is created. The filename and the inode number are stored separately, in the directory that contains the file, while the inode itself holds the file's metadata (permissions, owner, size, timestamps, and pointers to its data blocks). If a file's directory entry is lost, for example after file system corruption, fsck can recover the orphaned inode into the lost+found directory of the same partition. 

7. What are the characteristics of Softlinks? 

Softlinks, also known as symbolic links, are files that point to another file, much like a shortcut in Windows. They can span file systems, can link to directories, and have an inode number different from that of the source file. A softlink continues to exist even if its target is deleted, although it is then of no use, since the link it contains no longer resolves. 
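
For illustration, here is a minimal sketch that creates a softlink and shows these properties (the file names are placeholders):

    echo "hello" > original.txt
    ln -s original.txt shortcut.txt     # create the symbolic link
    ls -li original.txt shortcut.txt    # -i shows inode numbers; the two differ
    rm original.txt                     # delete the target
    cat shortcut.txt                    # fails: the link now dangles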

8. What are the differences between Cron and Anacron? 

Cron is a scheduler that executes jobs at specified times on a Linux server; one example is scheduled backups of log files or databases. Anacron is a variant of cron intended for workstations or clients that are not expected to be always on or connected to a network. Cron jobs can be scheduled down to the minute, whereas Anacron's smallest granularity is a day. Any user can set up cron jobs, but Anacron jobs can be configured only by the superuser. Because cron is typically used on servers that are up all the time, its jobs run exactly at the scheduled time. Anacron does not expect the machine to be up all the time, so a missed job is executed whenever the system is next up and available. 
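
As a hedged illustration of the difference in granularity, the entries below show one job scheduled both ways (the script path is an assumption):

    # cron entry (added via `crontab -e`): every day at 02:30 — minute-level control
    30 2 * * * /usr/local/bin/backup-logs.sh

    # anacron entry (/etc/anacrontab, root only): period of 1 day, 10-minute delay
    # after boot if the scheduled run was missed — day-level granularity at minimum
    1   10   daily-backup   /usr/local/bin/backup-logs.sh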

9. Explain the different types of Network bonding in Linux. 

Network bonding combines two or more network interfaces into a single logical interface to improve bandwidth, throughput, and redundancy. Linux supports the following seven bonding modes: 

a) balance-rr: The default mode, based on a round-robin policy. It provides both fault tolerance and load balancing. 

b) active-backup: Based on an active-backup policy. It is fault tolerant: only one slave is active at a time, and the bond's MAC address is visible on only that adapter. Another slave becomes active when the active one fails. 

c) balance-xor: Transmits based on an XOR of the source and destination MAC addresses, so each peer is consistently served by the same slave. It provides fault tolerance and load balancing. 

d) broadcast: Transmits everything on all slave interfaces. It provides fault tolerance. 

e) 802.3ad: Known as Dynamic Link Aggregation, this mode requires a switch that supports IEEE 802.3ad dynamic link aggregation. It creates aggregation groups of slaves that share the same speed and duplex settings. 

f) balance-tlb: Adaptive transmit load balancing. Outgoing traffic is distributed according to the current load on each slave; if a slave fails, another takes over its load. 

g) balance-alb: Also known as adaptive load balancing, this mode adds receive load balancing on top of balance-tlb and does not require any special switch support. 

10. Explain the Zombie process. 

A zombie process, also called a defunct process, is a process that has finished executing but still has an entry in the process table. Zombies are child processes whose parent has not yet executed the wait() system call to collect their exit status. Note that zombie processes cannot be removed with the kill command, since they are already dead; the entry remains in the process table until the parent explicitly calls wait() (or the parent itself exits, letting init reap the child). 
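
A small shell demo can make this visible. In this sketch, the subshell backgrounds a short-lived child and then execs a long-running process that never calls wait(), leaving the child as a zombie for about 30 seconds:

    ( sleep 1 & exec sleep 30 ) &      # the exec'd `sleep 30` inherits the child
    sleep 2                            # give the child time to exit
    ps -o pid,ppid,stat,cmd -C sleep   # the finished child shows state Z (<defunct>)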

Intermediate Linux Interview Questions and Answers  

These are the intermediate Linux interview questions with answers for professionals with 3-5 years of experience. 

11. What is the purpose of the `find` command, and how do you use it effectively? 

The `find` command is used to search for files and directories within a directory hierarchy based on criteria like name, type, size, or modification time. For example, `find /home/user -name "*.txt"` finds all `.txt` files in the `/home/user` directory. You can also use `-type` to specify file types (e.g., `-type d` for directories), or `-mtime` to search by modification time (e.g., `find . -mtime -7` finds files modified within the last 7 days). `find` is essential for managing files across large file systems efficiently. 

12. Explain the `tar` command and how it is commonly used. 

The `tar` command archives multiple files into a single file, often for backup or transfer. For example, `tar -cvf archive.tar file1 file2` creates an archive of `file1` and `file2`. The `-c` option creates the archive, `-v` displays progress, and `-f` specifies the filename. To compress with gzip, use `tar -czvf archive.tar.gz folder/`. Similarly, `-x` extracts files (e.g., `tar -xzvf archive.tar.gz` extracts a compressed archive). `tar` is invaluable for archiving, compressing, and transferring files efficiently. 

13. What is the `top` command, and how do you interpret its output? 

The `top` command provides a dynamic, real-time view of system resource usage. It displays processes, CPU and memory usage, load averages, and other statistics. Important columns include `%CPU` for CPU usage, `%MEM` for memory usage, `TIME+` for total CPU time, and `PID` for process ID. Users can sort columns or filter processes interactively. `top` is often used for performance monitoring and to identify resource-heavy processes, helping diagnose performance issues. 

14. How do you use the `df` and `du` commands, and what is the difference between them? 

The `df` command displays the disk space usage of file systems. For example, `df -h` provides a human-readable view, showing used and available space across mounted file systems. In contrast, `du` reports disk usage at the file and directory level. `du -sh /path/to/directory` shows the total space used by a directory, with `-s` summarizing the output and `-h` formatting it human-readably. `df` is used for an overview of disk space, while `du` is helpful for pinpointing space usage within directories. 

15. Describe how the `kill` and `killall` commands work. 

The `kill` command terminates a process by sending it a signal, typically `SIGTERM` (signal 15) or `SIGKILL` (signal 9). To terminate a process, use `kill -9 PID` where `PID` is the process ID. The `killall` command, however, terminates all processes with a specified name, e.g., `killall firefox` closes all instances of Firefox. While `kill` is useful for handling individual processes, `killall` is efficient for managing multiple processes of the same application or service. 

16. Explain the differences between BASH and DOS. 

BASH is the shell used in Linux to execute commands. BASH commands are case-sensitive, while DOS commands are not. In BASH, `/` is the directory separator and `\` is the escape character; in DOS, `/` serves as the argument (switch) delimiter and `\` is the directory separator. DOS follows a strict naming convention of an 8-character filename, a dot, and a 3-character extension denoting the file type, whereas BASH has no specific file-naming convention. 

17. Explain Daemon. 

A daemon is a background process that provides functionality not offered by the operating system by default. It stays active, listening for service requests, and activates the required service when a request arrives. Once the request has been served, it returns to a dormant state and waits for the next one. 

18. What is Puppet Server? 

A Puppet server is open-source software used to push configuration information to clients known as puppets (Puppet agents). It is IT automation software that can perform a wide range of activities, such as checking file permissions and installing new applications and utilities. You can use it to pass initial parameters when launching specific software, and to track that software and keep it working for your business. 

19. Explain the following: 

a) awk 

b) sed 

c) rsync 

Awk is a scripting language used to write small programs for searching and manipulating data. It does not require compiling and offers extensive string, numeric, and logical operations and functions. It is best suited for pattern scanning. 

The sed command is used for string manipulation in Linux. It operates on text streams and is extensively used for find-and-replace and complex pattern matching. 

The rsync command is used for remote syncing of files. You can use it to copy files efficiently, both locally and remotely. Note, however, that it does not support copying files between two remote hosts. 
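
A few illustrative one-liners for each tool (the file name and remote host are assumptions):

    # awk: print usernames (field 1) of accounts with UID >= 1000
    awk -F: '$3 >= 1000 {print $1}' /etc/passwd

    # sed: replace http with https throughout a file, in place
    sed -i 's/http/https/g' config.txt

    # rsync: mirror a local directory to a remote host
    rsync -avz /var/www/ user@remote:/var/www/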

20. Explain the Kernel and what it can provide. 

The Linux kernel is modular: modules can be loaded dynamically at runtime, extending the kernel's capabilities. A kernel module can provide, for example: 

a) A device driver, which adds support for new hardware. 

b) File system support, such as NFS (Network File System) or Btrfs (a modern copy-on-write file system). 

Kernel modules expose parameters through which the behavior of the system can be customized, and they can be loaded or unloaded as and when required. 
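
A quick sketch of inspecting, loading, and unloading modules (the module names depend on your kernel build):

    lsmod | head             # list currently loaded modules
    modinfo nfs | head       # show a module's metadata and parameters
    sudo modprobe btrfs      # load a module along with its dependencies
    sudo modprobe -r btrfs   # unload it when no longer needed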

Check out our other Networking Training courses, or Contact Learner Advisors.



Linux Interview Questions for Experienced

The following are a few advanced Linux interview questions with answers for people with 5+ years of experience. 

21. Explain how `iptables` works and how it is used for network security. 

`iptables` is a command-line firewall utility that controls network traffic by defining rules for incoming and outgoing packets based on IP addresses, protocols, and ports. It works by categorizing rules into chains (`INPUT`, `OUTPUT`, `FORWARD`), each controlling traffic flow through the system. For example, to allow SSH access, you might use `iptables -A INPUT -p tcp --dport 22 -j ACCEPT`. With `iptables`, administrators can set policies, block specific IPs, or enable/disable access to services. It is fundamental for securing Linux servers by controlling and monitoring network access. 
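
A minimal default-deny ruleset sketch, assuming SSH should stay reachable (the blocked address is a placeholder):

    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # keep existing sessions
    iptables -A INPUT -i lo -j ACCEPT                                 # allow loopback traffic
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # allow SSH
    iptables -A INPUT -s 203.0.113.5 -j DROP                          # block one specific IP
    iptables -P INPUT DROP                                            # default-deny everything else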

22. What is LVM, and how would you extend an existing logical volume? 

Logical Volume Manager (LVM) is a system for managing disk storage that allows for flexible disk resizing. To extend an existing logical volume, you first add more physical space (if needed), then use `lvextend` and `resize2fs` (for ext2/3/4 filesystems). For example, to extend a logical volume by 10GB, you might run `lvextend -L +10G /dev/vgname/lvname`, then `resize2fs /dev/vgname/lvname` to apply the new size. LVM is powerful for managing dynamic storage needs, allowing for resizing, snapshotting, and improved storage flexibility. 
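
An end-to-end sketch, assuming a newly added disk `/dev/sdb` and the volume names used above:

    sudo pvcreate /dev/sdb                     # register the new disk with LVM
    sudo vgextend vgname /dev/sdb              # add it to the volume group
    sudo lvextend -L +10G /dev/vgname/lvname   # grow the logical volume by 10 GB
    sudo resize2fs /dev/vgname/lvname          # grow the ext4 filesystem to match
    # or combine the last two steps: sudo lvextend -r -L +10G /dev/vgname/lvname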

23. Describe the purpose of `strace` and a situation where it might be used. 

`strace` is a debugging tool that traces system calls and signals made by a process. It’s commonly used to diagnose issues with a program by revealing interactions with the kernel, such as file access, network calls, or memory allocation. For example, `strace -p PID` attaches to a running process to observe its system calls. `strace` is valuable when debugging permission issues, troubleshooting dependencies, or identifying why a program hangs, fails, or misbehaves due to system-level interactions. 
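
Some illustrative invocations (the program name is an assumption):

    strace -e trace=openat ls /etc    # show only file-open system calls
    strace -f -o trace.log ./myapp    # follow forks and write the trace to a file
    strace -c ls /etc                 # summarize counts and time spent per syscall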

24. What is SELinux, and how does it enhance security? 

Security-Enhanced Linux (SELinux) is a security architecture integrated into the kernel that enforces access control policies. Unlike traditional discretionary access control (DAC), SELinux uses mandatory access control (MAC), applying strict security policies based on the principle of least privilege. This restricts processes to only required permissions, reducing the impact of compromised applications. For example, SELinux policies define which files a web server can access, blocking unauthorized access attempts. It’s essential in high-security environments, providing layered security against vulnerabilities. 

25. How would you troubleshoot and resolve a “Too many open files” error in Linux? 

The "Too many open files" error typically occurs when a process exceeds the file descriptor limit set by the system. To diagnose, you can use `ulimit -n` to view the current limit and `lsof | wc -l` to see the number of open files. To temporarily increase the limit, use `ulimit -n 4096`. For a permanent change, edit `/etc/security/limits.conf` and add lines like `username soft nofile 4096` and `username hard nofile 8192`. This error often requires adjusting system limits or optimizing applications to handle file descriptors better. 

26. What is the purpose of `systemd`, and how does it manage services in Linux? 

`systemd` is an init system and service manager for modern Linux distributions, responsible for booting and managing services, logging, and handling dependencies. It replaces older init systems like `SysVinit` and allows for parallel service startup, reducing boot time. To manage services, commands include `systemctl start servicename`, `systemctl stop servicename`, and `systemctl status servicename`. `systemd` also supports unit files that define services, targets, and sockets, making it an advanced and flexible system for process and service management. 
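
As a minimal sketch, a custom service can be defined and enabled like this (the unit name and ExecStart path are assumptions):

    sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
    [Unit]
    Description=My example application
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload         # pick up the new unit file
    sudo systemctl enable --now myapp    # start now and enable at boot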

27. Explain the use of `rsync` and how you would set up a basic remote file backup. 

`rsync` is a powerful tool for file synchronization and backup. It copies files and directories between locations, only transferring differences to reduce data transfer. To back up files remotely, you might use `rsync -avz /source/ user@remote:/destination/`, where `-a` preserves permissions, `-v` enables verbose output, and `-z` compresses data during transfer. `rsync` also supports incremental backups, making it efficient for regular backups, even over slow networks. 
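
A hedged sketch of a nightly remote mirror (the host, user, and paths are assumptions; `--delete` keeps the destination an exact mirror):

    rsync -avz --delete /data/ backupuser@backuphost:/backups/data/
    # schedule it nightly at 01:00 via `crontab -e`:
    # 0 1 * * * rsync -avz --delete /data/ backupuser@backuphost:/backups/data/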

28. How would you create and manage a Docker container in Linux? 

Docker containers isolate applications in lightweight, portable environments. To create a container, start with `docker run -d --name container_name image_name`, which pulls the specified image and runs it in detached mode. Use `docker ps` to view running containers and `docker exec -it container_name bash` to enter an interactive shell. Containers can be managed with commands like `docker stop` to stop and `docker rm` to remove them. Docker is essential for creating reproducible environments, often used in development and deployment pipelines. 

29. What is `TCPdump`, and how would you use it to monitor network traffic on a specific port? 

`TCPdump` is a network packet analyzer that captures and displays packets transmitted over a network. To monitor traffic on a specific port, you might use `tcpdump -i eth0 port 80`, where `-i` specifies the network interface and `port 80` filters traffic for port 80 (HTTP). `TCPdump` is useful for debugging network issues, analyzing suspicious traffic, or troubleshooting connectivity problems, giving insight into data flow at the packet level. 
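
A few more illustrative captures (the interface and host are assumptions):

    tcpdump -i eth0 host 192.0.2.10            # all traffic to or from one host
    tcpdump -i eth0 -w capture.pcap port 443   # save packets to a file for later analysis
    tcpdump -r capture.pcap -n | head          # read the capture back without name resolution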

30. Explain how you would automate tasks using `bash` scripting and give an example of a common use case. 

`bash` scripting automates repetitive tasks by running sequences of commands. Scripts can include variables, loops, conditionals, and functions to perform complex tasks. For example, a script to back up files might include commands to compress and move data to a backup directory, adding timestamps to filenames for organization. Here’s a simple example of a backup script: 


   #!/bin/bash
   BACKUP_DIR="/backup"
   SOURCE_DIR="/data"
   TIMESTAMP=$(date +"%Y%m%d")
   tar -czf "$BACKUP_DIR/backup_$TIMESTAMP.tar.gz" "$SOURCE_DIR"

Linux Interview Questions for DevOps 

This is a unique section of the guide, as it covers Linux interview questions for DevOps roles, focusing on automation, containerization, infrastructure as code, CI/CD integration, and Kubernetes management on Linux systems. 

31. What is the purpose of `Ansible`, and how does it work in a DevOps environment? 

Ansible is an open-source automation tool used for configuration management, application deployment, and orchestration. It operates over SSH and uses YAML-based playbooks to define tasks, which makes it agentless and easy to set up. In a DevOps environment, Ansible is commonly used for automating infrastructure provisioning, managing server configurations, and deploying applications in a consistent manner. For example, you can write an Ansible playbook to configure servers across a fleet with specific security settings, reducing manual configuration errors and ensuring consistency. 
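
A few illustrative commands (the inventory file, group name, and playbook are assumptions):

    ansible all -i inventory.ini -m ping                 # verify SSH connectivity to every host
    ansible webservers -i inventory.ini --become \
        -m apt -a "name=nginx state=present"             # install a package on one group
    ansible-playbook -i inventory.ini site.yml --check   # dry-run a playbook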

32. How does `Docker Compose` help manage multi-container applications, and can you give an example of its usage? 

Docker Compose is a tool used to define and run multi-container Docker applications using a `docker-compose.yml` file. It allows you to configure services, networks, and volumes in a single file, making it easier to manage complex applications. For example, in a typical web application, you can define the web server, database, and caching service all in one Compose file and then start everything with `docker-compose up`. This simplifies application setup and ensures that each component can communicate seamlessly within the same network. 
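
The typical lifecycle, run from the directory containing `docker-compose.yml` (the service name is an assumption):

    docker-compose up -d         # build/pull and start all services in the background
    docker-compose ps            # list the running services
    docker-compose logs -f web   # follow the logs of one service
    docker-compose down          # stop and remove the containers and networks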

33. What is a CI/CD pipeline, and how would you set one up using Linux commands? 

A CI/CD pipeline automates the steps of integrating and deploying code changes to production. It includes stages like code integration, testing, and deployment. On Linux, you could create a simple pipeline using `cron` jobs, shell scripts, and Git. For example: 

- A `cron` job might check the Git repository for changes. 

- A shell script could pull updates, run tests, build the application (e.g., using `make`), and deploy it. 
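
A minimal sketch of such a deploy script (the repository path, branch, build commands, and deploy target are assumptions):

    #!/bin/bash
    cd /srv/app || exit 1
    git fetch origin
    # deploy only when the remote branch has new commits
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
        git pull origin main
        make test && make build
        scp -r build/ deploy@appserver:/var/www/app/
    fi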

Most pipelines, however, are set up using dedicated CI/CD tools like Jenkins, GitLab CI, or CircleCI, which provide more robust automation features and integration capabilities. 

34. How would you monitor a Linux server’s performance in a DevOps context? 

Performance monitoring is crucial in DevOps for maintaining system health. Tools like `top`, `htop`, `vmstat`, and `iostat` can give real-time insights into CPU, memory, and I/O usage. For comprehensive monitoring, many DevOps teams use Prometheus and Grafana to collect and visualize metrics, enabling alerts and historical tracking. Setting up Prometheus with Node Exporter on each Linux server allows you to capture essential metrics, which Grafana can then visualize in dashboards for ongoing monitoring. 
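
For quick command-line checks before reaching for a full monitoring stack, something like this sketch works (the sampling intervals are arbitrary):

    vmstat 5 3       # CPU, memory, and swap activity, sampled every 5 seconds
    iostat -xz 5 3   # per-device I/O utilization and latency (sysstat package)
    free -h          # memory and swap usage at a glance
    uptime           # load averages over 1, 5, and 15 minutes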

35. Describe how you would set up a reverse proxy with Nginx in Linux. 

A reverse proxy forwards client requests to backend servers, often used for load balancing, SSL termination, and caching. To set up Nginx as a reverse proxy, you would install Nginx and modify the configuration file (e.g., `/etc/nginx/nginx.conf` or a file in `/etc/nginx/conf.d/`). For example: 


   server { 
       listen 80; 
       server_name example.com; 

       location / { 
           proxy_pass http://backend_server:8080; 
           proxy_set_header Host $host; 
           proxy_set_header X-Real-IP $remote_addr; 
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
       } 
   } 

This configuration routes traffic coming to `example.com` through the Nginx server to `backend_server` on port 8080. 
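
After editing the configuration, validate and apply it without dropping connections:

    sudo nginx -t                  # syntax-check the configuration files
    sudo systemctl reload nginx    # graceful reload on systemd-based distributions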

36. How would you handle secrets management in a DevOps environment on Linux? 

In DevOps, managing secrets securely is essential. Common approaches include: 

- Environment Variables: Load secrets from environment files but limit access and use secure permissions. 

- Secret Management Tools: Tools like HashiCorp Vault, AWS Secrets Manager, or GCP Secret Manager securely store and retrieve secrets. Vault can be used in Linux environments with commands like `vault kv put secret/path key=value` to securely store secrets. 

- Kubernetes Secrets: In Kubernetes, secrets can be stored as Kubernetes objects and mounted as volumes or injected as environment variables, restricting access based on RBAC policies. 
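
Two hedged examples of the tool-based approaches above (the secret names, paths, and values are placeholders):

    vault kv put secret/myapp/db password='s3cr3t'   # store a secret in Vault
    vault kv get secret/myapp/db                     # read it back
    kubectl create secret generic db-credentials \
        --from-literal=password='s3cr3t'             # create a Kubernetes secret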

37. What is a `Load Balancer`, and how would you set one up on a Linux server using HAProxy? 

A load balancer distributes incoming traffic across multiple backend servers, ensuring high availability and scalability. HAProxy is a popular open-source load balancer. To set it up, you would install HAProxy and edit its configuration file, typically found at `/etc/haproxy/haproxy.cfg`. Here’s a basic setup: 


   global 
       log /dev/log local0 
       maxconn 4096 

   defaults 
       log global 
       mode http 
       timeout connect 5000ms 
       timeout client 50000ms 
       timeout server 50000ms 

   frontend http_front 
       bind :80 
       default_backend servers 

   backend servers 
       balance roundrobin 
       server server1 192.168.1.10:80 check 
       server server2 192.168.1.11:80 check

This configuration distributes incoming HTTP traffic between `server1` and `server2` using round-robin load balancing. 
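
Before applying changes, validate the configuration and reload the service:

    haproxy -c -f /etc/haproxy/haproxy.cfg   # check the configuration for errors
    sudo systemctl reload haproxy            # reload without dropping connections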

38. How would you automate infrastructure provisioning on Linux with Terraform? 

Terraform automates infrastructure setup by using declarative configuration files. To provision infrastructure: 

- First, define resources in a `.tf` file. For example, if provisioning an EC2 instance, you might define it as: 

     provider "aws" { 
       region = "us-west-2" 
     } 

     resource "aws_instance" "example" { 
       ami           = "ami-123456" 
       instance_type = "t2.micro" 
     } 

- Run `terraform init` to initialize the environment. 

- Execute `terraform apply` to create the resources based on the configuration. 

Terraform is ideal for managing infrastructure as code (IaC), ensuring consistency, version control, and automation across environments. 

39. Explain how `Jenkins` can be integrated with a Linux server to automate deployment. 

Jenkins is a continuous integration/continuous deployment (CI/CD) tool that automates software building, testing, and deployment. To integrate Jenkins with Linux: 

- Install Jenkins and set it up to run as a service. 

- Use SSH to deploy applications to other Linux servers, often by configuring SSH keys for passwordless authentication. 

- Create a Jenkins pipeline with stages for building, testing, and deploying code. For example, a pipeline might use the following Groovy syntax: 


     pipeline { 
       agent any 
       stages { 
         stage('Build') { 
           steps { 
             sh 'make build' 
           } 
         } 
         stage('Deploy') { 
           steps { 
             sh 'scp -r app/ user@remote_server:/path/to/deploy' 
             sh 'ssh user@remote_server "systemctl restart app.service"' 
           } 
         } 
       } 
     } 

Jenkins automates the entire deployment process, reducing manual intervention and ensuring consistent application deployments. 

40. How would you configure a Kubernetes cluster on a set of Linux servers? 

To set up a Kubernetes cluster on Linux: 

- Start by installing `kubeadm`, `kubelet`, and `kubectl` on each server. 

- Initialize the control plane node using `kubeadm init --pod-network-cidr=192.168.0.0/16`. 

- Configure `kubectl` access by copying the kubeconfig file: `mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config`. 

- On worker nodes, use the join command provided after initialization to join them to the cluster. 

- Deploy a network plugin like Calico or Flannel to enable pod communication. 

This setup creates a basic Kubernetes cluster, ready for container orchestration, allowing DevOps teams to manage and scale applications across nodes.
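
As a consolidated sketch of the control-plane steps above (the pod CIDR and the Calico manifest URL and version are common defaults, but treat them as assumptions for your environment):

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown "$(id -u):$(id -g)" $HOME/.kube/config
    kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
    kubectl get nodes   # the node reports Ready once the network plugin is up
    # on each worker, run the `kubeadm join ...` command printed by `kubeadm init`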

Conclusion 

In conclusion, mastering Linux is essential for anyone pursuing a career in IT, software development, or systems administration. The interview questions covered in this article highlight key concepts, tools, and best practices that are crucial for understanding Linux environments.

By preparing for these questions, candidates can demonstrate their proficiency and problem-solving abilities, setting themselves apart in competitive job markets.

Whether you're a beginner or an experienced professional, continuous learning and hands-on practice with Linux will help you stay ahead and thrive in your career. 
