Practical Java Examples of the Big O Notation

Last modified: June 20, 2018

1. Overview

In this tutorial, we’ll talk about what Big O Notation means. We’ll go through a few examples to investigate its effect on the running time of your code.

2. The Intuition of Big O Notation

We often hear the performance of an algorithm described using Big O Notation.

The study of the performance of algorithms – or algorithmic complexity – falls into the field of algorithm analysis. Algorithm analysis answers the question of how many resources, such as disk space or time, an algorithm consumes.

We’ll be looking at time as a resource. Typically, the less time an algorithm takes to complete, the better.

3. Constant Time Algorithms – O(1)

How does the input size of an algorithm affect its running time? Key to understanding Big O is understanding the rates at which things can grow. The rate in question here is time taken per input size.

Consider this simple piece of code:

int n = 1000;
System.out.println("Hey - your input is: " + n);

Clearly, it doesn’t matter what n is, above. This piece of code takes a constant amount of time to run. It’s not dependent on the size of n.

Similarly:

int n = 1000;
System.out.println("Hey - your input is: " + n);
System.out.println("Hmm.. I'm doing more stuff with: " + n);
System.out.println("And more: " + n);

The above example is also constant time. Even if it takes 3 times as long to run, it doesn’t depend on the size of the input, n. We denote constant time algorithms as follows: O(1). Note that O(2), O(3) or even O(1000) would mean the same thing.

We don’t care about exactly how long it takes to run, only that it takes constant time.

4. Logarithmic Time Algorithms – O(log n)

Constant time algorithms are (asymptotically) the quickest. Logarithmic time is the next quickest. Unfortunately, they’re a bit trickier to imagine.

One common example of a logarithmic time algorithm is the binary search algorithm. To see how to implement binary search in Java, click here.
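
As a quick sketch of the idea (the class and method names here are our own, not necessarily those of the linked implementation), an iterative binary search might look like this:

```java
public class BinarySearchDemo {

    // Iterative binary search over a sorted array; each step halves the
    // remaining range, so the running time is O(log n).
    static int binarySearch(int[] sorted, int key) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1; // unsigned shift avoids overflow
            if (sorted[mid] < key) {
                low = mid + 1;
            } else if (sorted[mid] > key) {
                high = mid - 1;
            } else {
                return mid;
            }
        }
        return -1; // key not present
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 5, 7, 11, 13, 17, 19};
        System.out.println(binarySearch(data, 11)); // index of 11
        System.out.println(binarySearch(data, 4));  // absent key
    }
}
```

Note that each iteration discards half of the remaining candidates, which is exactly what makes the algorithm logarithmic.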

What is important here is that the running time grows in proportion to the logarithm of the input (in this case, log to the base 2):

for (int i = 1; i < n; i = i * 2){
    System.out.println("Hey - I'm busy looking at: " + i);
}

If n is 8, the output will be the following:

Hey - I'm busy looking at: 1
Hey - I'm busy looking at: 2
Hey - I'm busy looking at: 4

Our simple algorithm ran log(8) = 3 times.
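
We can sanity-check that count with a small helper (a sketch; the helper name is ours):

```java
public class LogLoopDemo {

    // Counts how many times the doubling loop from the section above runs.
    static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i < n; i = i * 2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8));    // log2(8) = 3
        System.out.println(countIterations(1024)); // log2(1024) = 10
    }
}
```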

5. Linear Time Algorithms – O(n)

After logarithmic time algorithms, we get the next fastest class: linear time algorithms.

If we say something grows linearly, we mean that it grows directly proportional to the size of its inputs.

Think of a simple for loop:

for (int i = 0; i < n; i++) {
    System.out.println("Hey - I'm busy looking at: " + i);
}

How many times does this for loop run? n times, of course! We don’t know exactly how long it will take for this to run – and we don’t worry about that.

What we do know is that the simple algorithm presented above will grow linearly with the size of its input.

We’d prefer a run time of 0.1n to (1000n + 1000), but both are still linear algorithms; they both grow directly in proportion to the size of their inputs.

Again, if the algorithm was changed to the following:

for (int i = 0; i < n; i++) {
    System.out.println("Hey - I'm busy looking at: " + i);
    System.out.println("Hmm.. Let's have another look at: " + i);
    System.out.println("And another: " + i);
}

The runtime would still be linear in the size of its input, n. We denote linear algorithms as follows: O(n).

As with the constant time algorithms, we don’t care about the specifics of the runtime. O(2n+1) is the same as O(n), as Big O Notation concerns itself with growth for input sizes.

6. N Log N Time Algorithms – O(n log n)

n log n is the next class of algorithms. The running time grows in proportion to n log n of the input:

for (int i = 1; i <= n; i++){
    for(int j = 1; j < n; j = j * 2) {
        System.out.println("Hey - I'm busy looking at: " + i + " and " + j);
    }
}

For example, if n is 8, then this algorithm will run 8 * log(8) = 8 * 3 = 24 times. Whether the for loop uses strict inequality or not is irrelevant as far as Big O Notation is concerned.
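
To sanity-check those 24 iterations, we can count them directly (a sketch; the class and method names are ours):

```java
public class NLogNDemo {

    // Counts the iterations of the nested loop from the section above:
    // the outer loop runs n times, the doubling inner loop about log2(n) times.
    static int countIterations(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j < n; j = j * 2) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8));  // 8 * log2(8) = 24
        System.out.println(countIterations(16)); // 16 * log2(16) = 64
    }
}
```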

7. Polynomial Time Algorithms – O(n^p)

Next up we’ve got polynomial time algorithms. These algorithms are even slower than n log n algorithms.

The term polynomial is a general term which contains quadratic (n^2), cubic (n^3), quartic (n^4), etc. functions. What’s important to know is that O(n^2) is faster than O(n^3), which is faster than O(n^4), etc.

Let’s have a look at a simple example of a quadratic time algorithm:

for (int i = 1; i <= n; i++) {
    for(int j = 1; j <= n; j++) {
        System.out.println("Hey - I'm busy looking at: " + i + " and " + j);
    }
}

If n is 8, this algorithm will run 8^2 = 64 times. Note that if we were to nest another for loop, this would become an O(n^3) algorithm.
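
For reference, that O(n^3) version might look like the following sketch, here with a counter so we can check the total (the class and method names are ours):

```java
public class CubicLoopDemo {

    // Three nested loops over n give n * n * n = n^3 iterations.
    static long countIterations(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                for (int k = 1; k <= n; k++) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(8)); // 8^3 = 512
    }
}
```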

8. Exponential Time Algorithms – O(k^n)

Now we are getting into dangerous territory; these algorithms grow in proportion to some factor exponentiated by the input size.

For example, O(2^n) algorithms double with every additional input. So, if n = 2, these algorithms will run four times; if n = 3, they will run eight times (kind of like the opposite of logarithmic time algorithms).

O(3^n) algorithms triple with every additional input, and O(k^n) algorithms grow by a factor of k with every additional input.

Let’s have a look at a simple example of an O(2^n) time algorithm:

for (int i = 1; i <= Math.pow(2, n); i++){
    System.out.println("Hey - I'm busy looking at: " + i);
}

If n is 8, this algorithm will run 2^8 = 256 times.

9. Factorial Time Algorithms – O(n!)

In most cases, this is pretty much as bad as it’ll get. This class of algorithms has a run time proportional to the factorial of the input size.

A classic example of this is solving the traveling salesman problem with a brute-force approach.

An explanation of the solution to the traveling salesman problem is beyond the scope of this article.

Instead, let’s look at a simple O(n!) algorithm, as in the previous sections:

for (int i = 1; i <= factorial(n); i++){
    System.out.println("Hey - I'm busy looking at: " + i);
}

where factorial(n) simply calculates n!. If n is 8, this algorithm will run 8! = 40320 times.
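
The factorial(n) helper isn't shown above; a minimal sketch could be the following (using long, which overflows past n = 20, more than enough here since a factorial-time loop is impractical long before that):

```java
public class FactorialDemo {

    // Computes n! iteratively: 1 * 2 * 3 * ... * n; returns 1 for n <= 1.
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(8)); // 40320
    }
}
```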

10. Asymptotic Functions

Big O is what is known as an asymptotic function. All this means is that it concerns itself with the performance of an algorithm at the limit – i.e., when lots of input is thrown at it.

Big O doesn’t care about how well your algorithm does with inputs of small size. It’s concerned with large inputs (think sorting a list of one million numbers vs. sorting a list of 5 numbers).

Another thing to note is that there are other asymptotic functions. Big Θ (theta) and Big Ω (omega) both describe algorithms at the limit (remember, “at the limit” just means for huge inputs).

To understand the differences between these 3 important functions, we first need to know that each of Big O, Big Θ, and Big Ω describes a set (i.e., a collection of elements).

Here, the members of our sets are algorithms themselves:

  • Big O describes the set of all algorithms that run no worse than a certain speed (it’s an upper bound)
  • Conversely, Big Ω describes the set of all algorithms that run no better than a certain speed (it’s a lower bound)
  • Finally, Big Θ describes the set of all algorithms that run at a certain speed (it’s like equality)

The definitions we’ve put above are not mathematically accurate, but they will aid our understanding.
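
For the curious, the standard mathematically precise form of the upper bound is the following (Big Ω flips the inequality, and Big Θ requires both bounds at once):

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N} :\quad
0 \le f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0
```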

Usually, you’ll hear things described using Big O, but it doesn’t hurt to know about Big Θ and Big Ω.

11. Conclusion

In this article, we discussed Big O notation, and how understanding the complexity of an algorithm can affect the running time of your code.

A great visualization of the different complexity classes can be found here.

As usual, the code snippets for this tutorial can be found over on GitHub.
