Counting Sort breaks the classic speed barrier of comparison-based sorting: instead of asking which of two elements is larger, it counts how often each value appears, and in doing so reaches linear time complexity. This article covers how DSA counting sort works, how it is analyzed in DAA (Design and Analysis of Algorithms), and when to employ this “linear wonder” instead of other methods.
What is Counting Sort?
Counting Sort is a sorting algorithm that doesn’t use comparisons. It doesn’t question “Is A > B?” Instead, it asks “How many times does the value X appear?” and “How many elements are smaller than X?”
It counts how many items have different key values, and then it uses math to figure out where each object should go in the output sequence. It is very quick since it translates values to indices in an auxiliary array, but it only works with certain types of input data.
Characteristics of counting sort:
- Non-Comparison: It never compares two things in the input array.
- Linear Time: It runs in O(n + k) time, where n is the number of elements and k is the range of input.
- Stable: It keeps the relative order of elements with the same value, which is important when used as a subroutine for Radix Sort.
- Integer-Based: It works well with whole numbers or data that can be put into a specific range of whole numbers.
How Does Counting Sort Work?
To understand DSA counting sort, let’s sort an array where the values range from 0 to 5.
Input Array: [4, 2, 2, 5, 3, 3, 1]
Step 1: Find the Maximum
Identify the maximum value in the array (max = 5). This determines the size of our “Count Array.”
Step 2: Initialize the Count Array
Create an array of size max + 1 (indices 0 to 5) and fill it with zeros.
Count Array: [0, 0, 0, 0, 0, 0]
Step 3: Count the Frequencies
Iterate through the input array and increment the index in the Count Array corresponding to each value.
- 1 appears once → Count[1] = 1
- 2 appears twice → Count[2] = 2
- …and so on.
Updated Count Array: [0, 1, 2, 2, 1, 1]
Step 4: Calculate Cumulative Frequency
Modify the Count Array by adding the previous count to the current one. This tells us the ending position of each element in the sorted output.
Cumulative Count Array: [0, 1, 3, 5, 6, 7]
Step 5: Build the Output Array
Iterate through the input array (from right to left to maintain stability), find its position from the Cumulative Count Array, place it in the Output Array, and decrement the count.
Final Sorted Array: [1, 2, 2, 3, 3, 4, 5]
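The five steps above can be traced directly in a few lines of Python. This is a compact sketch of the walkthrough using the same example array; the full, reusable implementation (which also handles negative numbers) appears later in the article.

```python
arr = [4, 2, 2, 5, 3, 3, 1]

# Steps 1-3: find the maximum and count frequencies
max_val = max(arr)                      # 5
count = [0] * (max_val + 1)
for v in arr:
    count[v] += 1                       # count == [0, 1, 2, 2, 1, 1]

# Step 4: cumulative counts give each value's ending position
for i in range(1, len(count)):
    count[i] += count[i - 1]            # count == [0, 1, 3, 5, 6, 7]

# Step 5: walk the input right-to-left so equal keys keep their order
output = [0] * len(arr)
for v in reversed(arr):
    count[v] -= 1
    output[count[v]] = v

print(output)                           # [1, 2, 2, 3, 3, 4, 5]
```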
Counting Sort in Design and Analysis of Algorithms
When studying counting sort in DAA, we focus on its mathematical constraints. Unlike comparison sorts, counting sort depends on an assumption about the input: the keys are integers drawn from a known, bounded range.
Time Complexity Analysis
The algorithm consists of four distinct loops:
- Finding the maximum: O(n)
- Initializing Count Array: O(k)
- Counting frequencies: O(n)
- Calculating cumulative sums and building output: O(n + k)
Total Time Complexity: O(n + k).
- If k (the range) is O(n), the algorithm is truly linear.
- If k is much larger than n (e.g., sorting [1, 10, 10⁹]), the algorithm becomes inefficient and consumes massive memory.
Space Complexity
The space complexity is O(n + k) because we require an auxiliary Count Array of size k and an Output Array of size n. This is the “cost” of linear speed: Counting Sort trades memory for time.
Counting Sort in Python
Here is a clean implementation showing how DSA counting sort handles an array of integers.
```python
def counting_sort(arr):
    if not arr:
        return arr

    max_val = max(arr)
    min_val = min(arr)
    range_of_elements = max_val - min_val + 1

    # Initialize count and output arrays
    count_arr = [0] * range_of_elements
    output_arr = [0] * len(arr)

    # Store the count of each value
    for i in range(len(arr)):
        count_arr[arr[i] - min_val] += 1

    # Change count_arr[i] so that it contains the actual position
    for i in range(1, len(count_arr)):
        count_arr[i] += count_arr[i - 1]

    # Build the output array (right to left to preserve stability)
    for i in range(len(arr) - 1, -1, -1):
        output_arr[count_arr[arr[i] - min_val] - 1] = arr[i]
        count_arr[arr[i] - min_val] -= 1

    return output_arr

# Usage
data = [4, 2, 2, 5, 3, 3, 1]
print(f"Sorted Array: {counting_sort(data)}")
```
Counting Sort Vs Comparison Sorts
Here is a quick difference between counting sort and comparison sort:
| Feature | Counting Sort | Quicksort / Merge Sort |
| --- | --- | --- |
| Logic | Frequency Distribution | Element Comparison |
| Best Time | O(n + k) | O(n log n) |
| Stability | Yes | Quicksort (No), Merge Sort (Yes) |
| Data Types | Discrete (Integers, Chars) | Any (Floats, Objects) |
| Memory | High (O(n + k)) | Low (O(n) or O(log n)) |
Uses of DSA Counting Sort
Where do we see counting sort in DAA applied in the real world?
- Radix Sort Subroutine: Counting Sort is the standard stable subroutine in Radix Sort, sorting numbers digit by digit. This is how large numbers and strings are ordered efficiently.
- DNA Sequencing: In bioinformatics, the DNA alphabet is tiny (just A, C, G, T), so Counting Sort can order millions of bases in seconds.
- Image Processing: Real-time image histograms (analyzing the frequency of pixel brightness levels 0–255) use Counting Sort logic to enhance images in milliseconds.
- Voting Systems: When tallying millions of votes for a small number of candidates, Counting Sort is a natural fit for producing results quickly.
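The DNA case makes the small-alphabet advantage concrete: with only four possible keys, k = 4 no matter how long the sequence is. Here is a minimal illustrative sketch (the function name and string-based approach are my own, not a bioinformatics library API) that sorts a DNA string purely by counting its bases.

```python
# The DNA alphabet is just {A, C, G, T}, so the count array
# has a fixed size of 4 regardless of sequence length.
BASES = "ACGT"

def sort_bases(seq):
    counts = {b: 0 for b in BASES}
    for b in seq:
        counts[b] += 1
    # Rebuild the sequence in alphabet order from the counts alone
    return "".join(b * counts[b] for b in BASES)

print(sort_bases("GATTACA"))  # AAACGTT
```

Because no comparisons happen, the running time is O(n + 4) ≈ O(n), which is why this pattern scales to sequencing workloads.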
When Not to Use Counting Sort?
Counting Sort is powerful but specialized. You should avoid it if:
- The Range is Huge: If you have 5 elements and one of them is 1,000,000,000, your Count Array will fill up your memory.
- Floating Point Numbers: You can’t use normal Counting Sort on numbers like 1.5 or 2.7 since the indices in an array have to be whole numbers.
- Complex Objects: If you are sorting complex objects without a discrete key, comparison-based sorts are more flexible.
Counting sort is a refreshing departure from how we usually think about sorting. It reaches a speed that comparison-based algorithms can only dream of, but that speed comes at the cost of memory and flexibility.
FAQs
Is Counting Sort stable?
Yes, if implemented correctly (iterating backwards through the input array when building the output). Stability is vital when the items being sorted are objects with multiple fields (like sorting students by age, then by grade).
Why is it called "Non-Comparison" sort?
Because at no point in the algorithm does the code check if (arr[i] < arr[j]). It simply looks at the value and increments a bucket.
What is the difference between Counting Sort and Bucket Sort?
Counting Sort is a specific type of Bucket Sort where each "bucket" is only for one specific value. Bucket Sort is more general and can handle floating-point numbers by putting ranges of values into buckets.
Can it sort negative numbers?
Standard implementation doesn't, but you can easily adapt it by finding the min_val and using arr[i] - min_val as the index (as shown in the Python code above).
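To make the min_val offset trick concrete, here is a small self-contained sketch (the function name is my own) that sorts an array containing negative values by shifting every key into a valid index range:

```python
def counting_sort_signed(arr):
    # Shift every value by min(arr) so negatives map to index 0 upward
    if not arr:
        return []
    lo, hi = min(arr), max(arr)
    count = [0] * (hi - lo + 1)
    for v in arr:
        count[v - lo] += 1
    # Expand the counts back into sorted order, undoing the shift
    result = []
    for i, c in enumerate(count):
        result.extend([i + lo] * c)
    return result

print(counting_sort_signed([-3, 5, 0, -3, 2]))  # [-3, -3, 0, 2, 5]
```

Note that this simplified version rebuilds the output from counts alone, so it sorts plain integers fine but is not stable for keyed objects; for that, use the cumulative-count construction shown earlier.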
Is O(n+k) always better than O(n log n)?
Only if k is small. If k grows exponentially compared to n (e.g., k = 2ⁿ), then O(n+k) is much worse than O(n log n).
