Improved bubble sort. Implementation of bubble sort in Java. Using a template

It has been estimated that up to a quarter of the time of centralized computers is spent sorting data. This is because it is much easier to find a value in an array that has been pre-sorted. Otherwise, the search is a bit like looking for a needle in a haystack.

There are programmers who spend all their working time studying and implementing sorting algorithms. This is because the vast majority of business software involves database management. People search databases for information all the time. This means that search algorithms are in high demand.

But there is one "but": search algorithms work much faster with databases that have already been sorted. In that case a simple search is all that is required.

While the computers sit idle without users at certain times, the sorting algorithms keep working on the databases, so that when the searchers come back, the database is already sorted for one search purpose or another.

This article provides examples of implementations of standard sorting algorithms.

Selection sort

To sort an array in ascending order, at each iteration you must find the element with the largest value and swap it with the last element of the unsorted part. The next-largest element is then placed in the second-to-last position, and so on until the elements at the front of the array are also in their proper order.

C++ code

void SortAlgo::selectionSort(int* data, int lenD)
{
    int j = 0;
    int tmp = 0;
    for (int i = 0; i < lenD; i++) {
        j = i;                              // index of the smallest element found so far
        for (int k = i; k < lenD; k++) {
            if (data[j] > data[k]) {
                j = k;
            }
        }
        tmp = data[i];                      // swap it with the first element of the unsorted part
        data[i] = data[j];
        data[j] = tmp;
    }
}

Bubble sort

Bubble sort compares adjacent elements and swaps them if the next element is smaller than the previous one. Several passes through the data are required. During the first pass, the first two elements in the array are compared; if they are out of order, they are swapped, and then the elements of the next pair are compared. If they, too, are out of order, they are swapped as well. Sorting proceeds this way on each cycle until the end of the array is reached.

C++ code

void SortAlgo::bubbleSort(int* data, int lenD)
{
    int tmp = 0;
    for (int i = 0; i < lenD; i++) {
        for (int j = lenD - 1; j >= (i + 1); j--) {
            if (data[j] < data[j - 1]) {    // neighbours out of order - swap them
                tmp = data[j];
                data[j] = data[j - 1];
                data[j - 1] = tmp;
            }
        }
    }
}

Insertion sort

Insertion sort splits the array into two regions: ordered and unordered. Initially, the entire array is an unordered region. On the first pass, the first element from the unordered region is removed and placed in the correct position in the ordered region.

On each pass, the size of the ordered region increases by 1, and the size of the disordered region decreases by 1.

The main loop runs from 1 to N-1. At the j-th iteration, element data[j] is inserted into the correct position in the ordered region. This is done by shifting all elements of the ordered region that are greater than data[j] one position to the right; data[j] is then inserted into the gap between the elements that are less than it and those that are greater.

C++ code

void SortAlgo::insertionSort(int* data, int lenD)
{
    int key = 0;
    int i = 0;
    for (int j = 1; j < lenD; j++) {
        key = data[j];                  // element to insert into the ordered region
        i = j - 1;
        while (i >= 0 && data[i] > key) {
            data[i + 1] = data[i];      // shift larger elements one position to the right
            i = i - 1;
        }
        data[i + 1] = key;              // drop the element into the freed slot
    }
}

Merge sort

C++ code

void SortAlgo::mergeSort(int* data, int lenD)
{
    if (lenD > 1) {
        int middle = lenD / 2;
        int rem = lenD - middle;
        int* L = new int[middle];           // left half
        int* R = new int[rem];              // right half
        for (int i = 0; i < lenD; i++) {
            if (i < middle)
                L[i] = data[i];
            else
                R[i - middle] = data[i];
        }
        mergeSort(L, middle);               // sort both halves recursively
        mergeSort(R, rem);
        int i = 0, j = 0, k = 0;            // merge the sorted halves back into data
        while (i < middle && j < rem) {
            if (L[i] <= R[j]) data[k++] = L[i++];
            else              data[k++] = R[j++];
        }
        while (i < middle) data[k++] = L[i++];
        while (j < rem)    data[k++] = R[j++];
        delete[] L;
        delete[] R;
    }
}

Quick sort

Quicksort is a divide-and-conquer algorithm. It begins by splitting the original array into two regions, to the left and right of a chosen element called the pivot. At the end of the partitioning step, one part contains the elements smaller than the pivot and the other part contains the elements larger than the pivot.

C++ code

void SortAlgo::quickSort(int* data, int const len)
{
    int const lenD = len;
    int pivot = 0;
    int ind = lenD / 2;
    int i, j = 0, k = 0;
    if (lenD > 1) {
        int* L = new int[lenD];
        int* R = new int[lenD];
        pivot = data[ind];                  // the middle element is taken as the pivot
        for (i = 0; i < lenD; i++) {
            if (i != ind) {
                if (data[i] < pivot) {      // smaller elements go to the left part
                    L[j] = data[i];
                    j++;
                } else {                    // the rest go to the right part
                    R[k] = data[i];
                    k++;
                }
            }
        }
        quickSort(L, j);                    // sort both parts recursively
        quickSort(R, k);
        for (int cnt = 0; cnt < lenD; cnt++) {
            if (cnt < j)       data[cnt] = L[cnt];
            else if (cnt == j) data[cnt] = pivot;
            else               data[cnt] = R[cnt - j - 1];
        }
        delete[] L;
        delete[] R;
    }
}

Hi all!

Today we will look at bubble sort. This algorithm is often taught in schools and universities, so we will use Pascal. So, what is sorting? Sorting is the ordering of elements from smallest to largest (an ascending sort) or from largest to smallest (a descending sort). It is usually arrays that are sorted.

There are various sorting algorithms. Some are good at sorting a large number of elements, others are more efficient with a very small number of elements. Our bubble method is characterized by:


Pros:
  • Ease of implementation of the algorithm
  • Beautiful name
Cons:
  • One of the slowest sorting methods (its running time grows quadratically with the length of the array, O(n²))
  • Almost not used in real life (used mainly for educational purposes)
Let us have a certain array: 3 1 4 2

Algorithm: we take an element of the array and compare it with the next one; if our element is greater than the next one, we swap them. After going through the entire array, we can be sure that the maximum element has been “pushed out” and is now the very last one. Thus, one element is already exactly in its place. Since we need to put all of them in their places, we must repeat this operation one time fewer than there are array elements. The last element ends up in place automatically once the rest are in theirs.

Let's return to our array: 3 1 4 2
We take the first element, “3”, and compare it with the next one, “1”. Since “3” > “1”, we swap them:
1 3 4 2
Now we compare “3” and “4”: three is not greater than four, so we do nothing. Next we compare “4” and “2”. Four is greater than two, so we swap them: 1 3 2 4. The cycle is over. This means that the largest element should already be in place, and we see that this is exactly what happened. Wherever “4” (our largest element) starts out, after the loop has traversed the entire array it ends up last. An analogy: just as an air bubble floats up in water, so our element floats up through the array. That is why the algorithm is called "Bubble Sort". To position the next element, we have to start the cycle all over again, but the last element no longer needs to be considered, because it is already in its place.


We compare “1” and “3” - we don’t change anything.
We compare “3” and “2”: three is greater than two, so we swap them. The result is 1 2 3 4. The second cycle is complete. We have now finished two cycles, which means we can say with confidence that our last two elements are already sorted. All we have to do is place the third element, and the fourth will fall into the right place automatically. Once again we compare the first element with the second, see that everything is already in place, and conclude that the array can be considered sorted in ascending order.

Now all that remains is to program this algorithm in Pascal.

const n = 4; { a constant: the length of the array }
var
  i, j, k: integer;            { two variables for the nested loops, one for swapping elements }
  m: array[1..n] of integer;   { the array itself }
begin
  { read the array from the keyboard }
  Writeln('Enter an array:');
  for i := 1 to n do
  begin
    Writeln(i, ' element:');
    Readln(m[i]);
  end;
  { the outer loop repeats the inner one as many times as there are array elements minus 1 }
  for i := 1 to n - 1 do
  begin
    { the inner loop walks through the elements and compares neighbours }
    for j := 1 to n - i do
    begin
      { if an element is greater than the next one, swap them }
      if m[j] > m[j + 1] then
      begin
        k := m[j];
        m[j] := m[j + 1];
        m[j + 1] := k;
      end;
    end;
  end;
  { print the result }
  for i := 1 to n do
    Write(m[i], ' ');
end.


Everyone knows very well that, among the exchange sorts, the fastest method is the so-called quick sort. Dissertations are written about it, many articles on Habré are devoted to it, and complex hybrid algorithms are built on top of it. But today we will not talk about quick sort; instead we will look at another exchange method - the good old bubble sort and its improvements, modifications, mutations and variations.

The practical benefit of these methods is not great, and many Habr users went through all this in first grade. So the article is addressed to those who have only just become interested in the theory of algorithms and are taking their first steps in this direction.

image: bubbles

Today we'll talk about the simplest exchange sortings.

If anyone is interested, I will mention that there are other classes - selection sorts, insertion sorts, merge sorts, distribution sorts, hybrid sorts and parallel sorts. By the way, there are also esoteric sorts: various fake, fundamentally unrealizable, comic and other pseudo-algorithms, about which I will write a couple of articles in the “IT Humor” hub.

But that has nothing to do with today's lecture; right now we are only interested in simple exchange sorts. There are quite a few exchange sorts themselves (I know more than a dozen), so we will look at the so-called bubble sort and a few others closely related to it.

I will warn you in advance that almost all of the methods described here are very slow, and there will be no in-depth analysis of their time complexity. Some are faster, some slower, but roughly speaking the average is O(n²). Also, I see no reason to clutter the article with implementations in any particular programming language; those interested can easily find code examples on Rosetta, Wikipedia or elsewhere.

But let's get back to exchange sorts. Ordering happens as a result of repeated sequential passes over the array and comparisons of pairs of elements with each other. If the elements being compared are out of order relative to each other, we swap them. The only question is how exactly to traverse the array and on what basis to choose the pairs for comparison.

Let's start not with the standard bubble sort, but with an algorithm called...

Stupid sort

The sort really is stupid. We scan the array from left to right, comparing neighbors along the way. If we come across a pair of mutually unsorted elements, we swap them and return to square one, that is, to the very beginning. We walk through and check the array again; if we again come across an “incorrect” pair of neighboring elements, we swap them and start all over again. We continue like this until the array, little by little, becomes sorted.
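To make the idea concrete, here is a minimal C++ sketch of this behaviour (the function name stupidSort and the plain int array are my own illustrative choices, not from the article):

#include <utility>   // std::swap

// "Stupid sort": after every swap, go back to the very beginning of the array.
void stupidSort(int a[], int n)
{
    int i = 0;
    while (i < n - 1) {
        if (a[i] > a[i + 1]) {
            std::swap(a[i], a[i + 1]);   // fix the unsorted pair...
            i = 0;                       // ...and return to square one
        } else {
            i++;
        }
    }
}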

“Any fool can sort like that,” you will say, and you will be absolutely right. That is why the sort is called “stupid”. In this lecture we will consistently improve and modify this method. Right now its time complexity is O(n³); after one correction we will bring it down to O(n²), then we'll speed it up a little more, then a little more, and in the end we'll get O(n log n) - and it won't be “Quick Sort” at all!

Let's make one single improvement to the stupid sort. Having discovered two adjacent unsorted elements during a pass and swapped them, we will not roll back to the beginning of the array, but will calmly continue traversing it to the very end.

In this case, we have before us nothing more than the well-known...

Bubble sort

Or sorting by simple exchanges. An immortal classic of the genre. The principle is simple: we traverse the array from beginning to end, swapping unsorted neighboring elements as we go. As a result of the first pass, the maximum element “floats” to the last place. Now we again traverse the unsorted part of the array (from the first element to the penultimate one), swapping unsorted neighbors along the way. The second-largest element ends up in second-to-last place. Continuing in the same spirit, we traverse the ever-shrinking unsorted part of the array, pushing the maxima we find to the end.

If we not only push the maximums to the end, but also float the minimums to the beginning, we get...

Shaker sorting

Also known as shuffle sort, also known as cocktail sort. The process begins as in the “bubble”: we squeeze the maximum out to the very end. After that we turn 180° and move in the opposite direction, this time rolling the minimum, not the maximum, back to the beginning. Having put the first and last elements of the array in place, we do a somersault again. After going back and forth several times, we eventually end the process, meeting in the middle of the list.

Shaker sorting works a little faster than bubble sorting, since both maximums and minimums alternately migrate through the array in the required directions. The improvements, as they say, are obvious.

As you can see, if you approach the enumeration process creatively, then pushing out heavy (light) elements to the ends of the array happens faster. Therefore, the craftsmen proposed another non-standard “road map” to get around the list.

Even-odd sorting

This time we will not scurry back and forth across the array, but will return to the idea of a systematic left-to-right walk, only now with a wider step. On the first pass, elements at odd positions are compared with (and, if need be, swapped with) their neighbors at even positions (the 1st is compared with the 2nd, then the 3rd with the 4th, the 5th with the 6th, and so on). On the next pass we compare/swap the “even” elements with the “odd” ones. Then again “odd-even”, then again “even-odd”. The process stops when, after two consecutive passes through the array (“odd-even” and “even-odd”), not a single exchange has occurred - the array is sorted.

In a regular “bubble”, during each pass we methodically squeeze the current maximum toward the end of the array. If we hop along the even and odd indices instead, then all the more-or-less large elements of the array are pushed one position to the right at the same time in a single run. It works faster this way.
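As a rough C++ illustration of such passes (the function name oddEvenSort is mine; note that the code uses 0-based indices, while the text above counts positions from 1):

#include <utility>   // std::swap

// Odd-even sort: alternate passes over the "odd" and "even" pair boundaries
// until a full round of both passes produces no exchanges.
void oddEvenSort(int a[], int n)
{
    bool sorted = false;
    while (!sorted) {
        sorted = true;
        for (int i = 1; i + 1 < n; i += 2) {   // pairs (1,2), (3,4), ... in 0-based indexing
            if (a[i] > a[i + 1]) { std::swap(a[i], a[i + 1]); sorted = false; }
        }
        for (int i = 0; i + 1 < n; i += 2) {   // pairs (0,1), (2,3), ...
            if (a[i] > a[i + 1]) { std::swap(a[i], a[i + 1]); sorted = false; }
        }
    }
}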

Let's look at today's last pokrashchennya* for sortuvannya bulbashkoyu** - sortuvannya hrebintsem***, that is, comb sort. This method sorts very quickly: O(n²) is its worst-case complexity, on average it runs in O(n log n), and in the best case - you won't even believe it - O(n). In other words, it is a very worthy competitor to all sorts of “quick sorts”, and that, mind you, without using recursion. However, I promised that today we would not go deep into running-time analysis, so I'll stop talking and move straight on to the algorithm.


It's all the turtles' fault

A little background. In 1980, Włodzimierz Dobosiewicz explained why bubble sort and its derivatives work so slowly. It's all because of the turtles. “Turtles” are small elements located near the end of the list. As you may have noticed, bubble sorts are focused on the “rabbits” (not to be confused with Babushkin's “rabbits”) - large-value elements at the beginning of the list. They move very briskly toward the finish line, while the slow reptiles crawl toward the start reluctantly. The turtles, however, can be hurried along with a comb.

image: guilty turtle

Comb sorting

In “bubble”, “shaker” and “odd-even”, when iterating through an array, neighboring elements are compared. The main idea of the “comb” is to initially take a sufficiently large distance between the elements being compared and, as the array is ordered, to narrow this distance down to the minimum. In this way, we sort of comb the array, gradually smoothing it into increasingly neat strands.

It is better not to choose the initial gap between the compared elements arbitrarily, but to derive it from a special value called the reduction factor, whose optimal value is approximately 1.247. First, the distance between elements is equal to the size of the array divided by the reduction factor (the result, of course, is rounded to the nearest integer). Then, after traversing the array with this step, we divide the step by the reduction factor again and go through the list once more. This continues until the difference between indices reaches one, at which point the array is finished off with a regular bubble sort.

The optimal value of the reduction factor has been established both experimentally and theoretically: it is approximately 1.247.
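As an illustrative C++ sketch of this scheme (the function name combSort is mine; the factor 1.247 is simply the value quoted above):

#include <utility>   // std::swap

// Comb sort: start with a large gap, shrink it by the reduction factor on each
// pass; once the gap reaches 1 the remaining work is a plain bubble sort.
void combSort(int a[], int n)
{
    const double factor = 1.247;
    int gap = n;
    bool swapped = true;
    while (gap > 1 || swapped) {
        if (gap > 1)
            gap = static_cast<int>(gap / factor);   // narrow the "comb"
        swapped = false;
        for (int i = 0; i + gap < n; i++) {
            if (a[i] > a[i + gap]) {
                std::swap(a[i], a[i + gap]);
                swapped = true;
            }
        }
    }
}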

When this method was invented, at the turn of the 1970s and 1980s, few people paid attention to it. A decade later, when programming was no longer the preserve of IBM scientists and engineers but was rapidly gaining mass popularity, the method was rediscovered, studied and popularized in 1991 by Stephen Lacy and Richard Box.

That’s actually all I wanted to tell you about bubble sorting and others like it.

- Notes

* pokrashchennya (Ukrainian) - improvement
** sortuvannya bulbashkoyu (Ukrainian) - bubble sort
*** sortuvannya hrebintsem (Ukrainian) - comb sort


Let's lay the array out from top to bottom, from the zeroth element to the last.

The idea of the method: a sorting step consists of a pass through the array from bottom to top. Along the way, pairs of neighboring elements are examined; if the elements of a pair are in the wrong order, we swap them.

After the zeroth pass through the array, the lightest element ends up at the top - hence the analogy with a bubble. The next pass is made up to the second element from the top, thus lifting the second-lightest element to its correct position...

We make passes along the ever-decreasing lower part of the array until only one element remains in it. This is where the sorting ends, since the sequence is ordered in ascending order.

template<class T>
void bubbleSort(T a[], long size)
{
    long i, j;
    T x;
    for (i = 0; i < size; i++) {             // i - pass number
        for (j = size - 1; j > i; j--) {     // inner loop
            if (a[j - 1] > a[j]) {           // neighbours out of order - swap them
                x = a[j - 1];
                a[j - 1] = a[j];
                a[j] = x;
            }
        }
    }
}

The average number of comparisons and exchanges grows quadratically, Θ(n²), from which we can conclude that the bubble algorithm is very slow and inefficient.
However, it has a huge advantage: it is simple and lends itself to all kinds of improvements. Which is what we are going to do now.

First, let's consider a situation where no exchanges occurred on any of the passes. What does it mean?

This means that all pairs are in the correct order, so the array is already sorted. And it makes no sense to continue the process (especially if the array was sorted from the very beginning!).

So, the first improvement of the algorithm is to remember whether any exchange was made on a given pass. If not, the algorithm terminates.

The improvement process can be continued if you remember not only the fact of the exchange itself, but also the index of the last exchange k. Indeed: all pairs of neighboring elements with indices less than k are already located in the required order. Further passes can end at index k, instead of moving to a predetermined upper bound i.
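Putting both improvements together, a sketch in the same template style might look like this (the name bubbleSortFlag and the exact loop bookkeeping are illustrative, not canonical):

template<class T>
void bubbleSortFlag(T a[], long size)
{
    long lb = 0;                       // everything below lb is already in its final place
    bool exchanged = true;
    while (exchanged && lb < size - 1) {
        exchanged = false;
        long k = size - 1;             // index of the last exchange on this pass
        for (long j = size - 1; j > lb; j--) {
            if (a[j - 1] > a[j]) {
                T x = a[j - 1]; a[j - 1] = a[j]; a[j] = x;
                exchanged = true;
                k = j;
            }
        }
        lb = k;                        // pairs with indices below k are already in order
    }
}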

A qualitatively different improvement of the algorithm can be obtained from the following observation. Although a light bubble at the bottom will rise to the top in one pass, heavy bubbles will sink at a minimal rate: one step per iteration. So the array 2 3 4 5 6 1 will be sorted in 1 pass, but sorting the sequence 6 1 2 3 4 5 will require 5 passes.

To avoid this effect, you can change the direction of successive passes. The resulting algorithm is sometimes called "shaker sorting".

template<class T>
void shakerSort(T a[], long size)
{
    long j, k = size - 1;
    long lb = 1, ub = size - 1;   // boundaries of the unsorted part of the array
    T x;
    do {
        // pass from bottom to top
        for (j = ub; j > 0; j--) {
            if (a[j - 1] > a[j]) {
                x = a[j - 1]; a[j - 1] = a[j]; a[j] = x;
                k = j;
            }
        }
        lb = k + 1;
        // pass from top to bottom
        for (j = 1; j <= ub; j++) {
            if (a[j - 1] > a[j]) {
                x = a[j - 1]; a[j - 1] = a[j]; a[j] = x;
                k = j;
            }
        }
        ub = k - 1;
    } while (lb < ub);
}

To what extent have the described changes affected the efficiency of the method? The average number of comparisons, although reduced, remains O(n²), while the number of exchanges has not changed at all. The average (and also the worst-case) number of operations remains quadratic.

Additional memory is obviously not required. The behavior of the improved (but not the original) method is quite natural: an almost sorted array will be sorted much faster than a random one. Bubble sort is stable, and since the shaker version still swaps only adjacent out-of-order elements, it retains this property as well.

In practice, the bubble method, even with improvements, works, alas, too slowly. And therefore it is almost never used.

When working with data arrays, the task of sorting them in ascending or descending order, i.e. ordering them, often arises. This means that the elements of the array must be arranged strictly in order. For example, in an ascending sort, each preceding element must be less than (or equal to) the one that follows it.

Solution

There are many sorting methods. Some of them are more efficient, others are easier to understand. One that is fairly easy to understand is the bubble method, which is also called the simple exchange method. What is it, and why does it have such a strange name, the “bubble method”?

As you know, air is lighter than water, so air bubbles float. It's just an analogy. In ascending bubble sorting, lighter (smaller value) elements gradually “float” to the beginning of the array, and heavier ones, one after another, fall to the bottom (to the end of the array).

The algorithm and features of this sorting are as follows:

  1. During the first pass through the array, the elements are compared with each other in pairs: the first with the second, then the second with the third, then the third with the fourth, etc. If the previous element is larger than the subsequent one, then they are swapped.
  2. It is not difficult to guess that gradually the largest number turns out to be the last. The rest of the array remains unsorted, although there is some movement of lower value elements to the beginning of the array.
  3. On the second pass, there is no need to compare the last element with the penultimate one. The last element is already in place. This means that the number of comparisons will be one less.
  4. On the third pass, there is no longer any need to compare the penultimate and third element from the end. Therefore, the number of comparisons will be two less than during the first pass.
  5. Finally, on the last pass, when there are only two elements left to compare, only one comparison is performed.
  6. After this, there is nothing to compare the first element to, and therefore a final pass through the array is unnecessary. In other words, the number of passes through the array is m-1, where m is the number of elements in the array.
  7. The number of comparisons in each pass is equal to m-i, where i is the number of passes through the array (first, second, third, etc.).
  8. When exchanging array elements, a “buffer” (third) variable is usually used, where the value of one of the elements is temporarily placed.

Pascal program:

const m = 10;
var
  arr: array[1..m] of integer;
  i, j, k: integer;
begin
  randomize;
  write('Source array: ');
  for i := 1 to m do
  begin
    arr[i] := random(256);
    write(arr[i]:4);
  end;
  writeln;
  writeln;
  for i := 1 to m - 1 do
    for j := 1 to m - i do
      if arr[j] > arr[j + 1] then
      begin
        k := arr[j];
        arr[j] := arr[j + 1];
        arr[j + 1] := k
      end;
  write('Sorted array: ');
  for i := 1 to m do
    write(arr[i]:4);
  writeln;
  readln
end.