Wheee! That reminds me of the old times. NOI '01 was the practice paper I did before going for my first NOI in '02. Anyway, Juliana wants a new quiz, so I'll take my favourite question in all of NOI. Not because it's hard; it's extremely easy. But the things you learn from this question, gosh! A lot. Especially when you're an experienced university student (a Sec 3 student like I was wouldn't learn as much, though it did teach me quite a bit).

So the question is very simple: given an array of n integers, calculate the frequency of the mode of those integers (the mode is the number that occurs the most times in the array). You can choose to write in C, C++, C#, or Java. To make things easier, here are the function prototypes:

In C:

`int find_mode_frequency(int* ints, int n);`

In C++:

`int FindModeFrequency(const vector<int>& ints);`

In Java:

`int findModeFrequency(int[] ints);`

In C#: similar to Java.

Now here's the catch... you have to provide 4 different algorithms:

- with Theta(n^2);
- with O(n^2), Omega(n);
- with Theta(n lg n);
- with Theta(n).

(Those who have heard this question from me before, refrain from spoiling!!)

- Chris

P.S. The actual question can be found here (look for Task 1).

Edit: Thanks to Edwin for arguing that the challenge is not very precise (sloppy, in fact); read comments 1 to 7 below for additional information. The challenge has been changed slightly to use tighter bounds. Again, apologies for being sloppy with the bounds. The spirit of the challenge remains: to find different algorithms that do the same thing with different performance characteristics.

## 24 comments:

Are you sure we need 4 different algorithms?

Because an O(n) algorithm is also an O(n lg n) algorithm and an O(n^2) algorithm.

Remember that O is a tight bound. An O(n) algorithm is not an O(n lg n) algorithm. An O(n lg n) algorithm is not an O(n^2) one. (If that O were a lowercase o, then you'd be right.)

The challenge is to find the algorithm with the correct bound.

Correction:

s/tight bound/tight upper bound/

I thought O is not a tight upper bound unless specified. By default O is just an upper bound, with a lot of freedom to move around.

You're confusing big-O and small-o (and probably big-Theta). big-O bounds the number of calculations above by a constant factor, i.e. it's a tight upper bound: any less and it won't bound the function anymore. small-o simply bounds above; it's not tight.

Big-Theta bounds tightly both above and below (not every function has a big-Theta; only those with big-O and big-Omega of the same order).

P.S. If you're unhappy with my definitions (taken from the CLRS book), you can take the questions as one for Theta(n^2), another for O(n^2) and Omega(n), another for Theta(n lg n), and yet another for Theta(n).

I double-checked, and from a strictly mathematical perspective, you're right. I'm used to using small-o for a non-tight upper bound and big-O for a tight upper bound. But you're perfectly right to argue that big-O may not be asymptotically tight. So I should really use Theta(n lg n) and Theta(n) instead of being sloppy and using big-O.

Thanks for the arguments; they helped remind me of the mathematical definition of big-O.

So for now, take it as if they are Theta instead of big-O. The challenge remains the same.

Under the section "Formal Definitions" of this URL: Big O Notation

f(x) = O(g(x)) if and only if there exist constants M > 0 and x_0 such that f(x) <= M*g(x) for all x > x_0

so if f(x) = 1, then f(x) <= 1*x^2 for x > 1; therefore an algorithm that takes constant time is also bounded above by x^2 and is therefore O(n^2).

Correct me if I am wrong. Both Prof. Roger Zimmermann and Prof. David Hsu agreed with me when I posed this question to them.

Yes, as with my earlier comment, I checked the maths, and I was wrong. So I've edited the challenge definition to use Theta. So there's no sloppiness anymore.

There's no excuse for using big-O other than sloppiness. To use Theta, I had to rethink my solutions and make sure that they're "Theta-compatible" (a word I made up), and they are (except for the second case).

Interesting thing to know:

I was chatting with a colleague and this matter somehow got brought up (the usual "heh, I just realized this morning that my perception of big-O is wrong..."). He pointed out that while mathematically big-O does not imply a tight bound, it has been *abused* by much of the literature to mean Theta. Most of the time, computer scientists are sloppy enough to describe tight bounds with big-O. In the context of this quiz, it's probably obvious (or not) that Theta is the one requested.
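For reference, the CLRS-style definitions the thread has been arguing over can be stated as:

```latex
O(g(n))      = \{\, f(n) : \exists\, c, n_0 > 0 \text{ s.t. } 0 \le f(n) \le c\, g(n) \ \forall n \ge n_0 \,\}
\Omega(g(n)) = \{\, f(n) : \exists\, c, n_0 > 0 \text{ s.t. } 0 \le c\, g(n) \le f(n) \ \forall n \ge n_0 \,\}
\Theta(g(n)) = O(g(n)) \cap \Omega(g(n))
```

So big-O on its own is only an upper bound; tightness in both directions is exactly what Theta adds.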

Btw, getting an asymptotically tight bound of Theta(n) is pretty difficult and depends on the implementation (the idea is simple). You probably won't be able to implement the Theta(n) solution completely on your own (i.e. without utilizing some external library code). The other three solutions can be written completely on your own without relying on other libraries.

If this isn't proof that NUS Computing professors are a joke, I don't know what is.

I suppose "Prof Roger Zimmermann & Prof. David Hsu" are responsible for your sloppy education?

Use "external libraries" for complexity proofs? Yeah, right. And I have a crystal ball that has an O(1) solution... get a clue about problem reduction.

Or just enroll in a better school and get a proper education.

Simon,

Prof. Roger Zimmermann & Prof. David Hsu taught us that O(n) algorithms are also O(n^2) etc. And that is correct.

They taught us the truth. Please don't talk about them like they are a joke.

I think NUS is doing a great job of educating the best minds from this country and developing nations, which I believe will eventually do this country good.

THESE are the BEST MINDS in your country? Singapore? Oh my gosh... really? How sad..

Edwin,

You can ignore his comment. That's just flame bait. Replying will just suck you into it. It's better to spend the time doing homework, studying, or solving the quizzes. (: And you don't need to worry about your profs either; if I were them, I wouldn't worry about such petty comments. (:

Now to reply on usage of external libraries (which is a valid comment):

Yes, you need to use external libraries, because your own implementation will most likely not be O(n). Unfortunately, in this respect you're right: NUS does not teach the stuff needed to solve this question in O(n). If you want to, of course you can implement it yourself. You just need to do enough reading to find the right implementation.

Your complexity analysis should be based on this optimized implementation. Only then will you get O(n). Otherwise, you can always say things like average-case O(n) (similar to saying that naive/randomized quicksort is O(n lg n) on average, while the optimized, order-statistics-based version is O(n lg n) in the worst case).

"Yes, you need to use external libraries, because your own implementation will most likely not be O(n)."

Please name the professor in NUS who supports or taught you the above statement.

Flame? Please. I'm merely using your own words, "best minds", professor's names, etc. Shooting yourself in the foot, is what it is..

It's easy. (: Solve it. As simple as that. If you implement the solution entirely by yourself, without any help from the C++ STL or Java libraries, I applaud you. Though you'll still lose in an algorithms competition. Let's see: you, implementing the data structure, probably 10 minutes if you already know the optimization (provided you have no bugs); I, using the library, one line. I was taught well that if it is a common data structure, someone else will have implemented it already, and implemented it much better than I could have hoped to. Why is using such libraries wrong? I know how the libraries work; I'm just not interested in spending time optimizing my data structure. As simple as that. Now I wonder, if you were employed and you solved all your problems by implementing everything yourself, would you survive? My guess is not really.

Even at work, we all try to find ready implementations (open source? a fellow colleague wrote one already?), especially of commonly available stuff (blocking counters, monitors, hashmaps, hashsets, template engines, HTTP servers). They are well-tested (fewer bugs is good) and they are optimized. The only time I will augment such a data structure is when I need to specialize it. Even then I'll try to use as much of the available code as possible. Now who's better taught? You? Or me? ;)

Chris. You are seriously messed up about complexity analysis. I suggest you:

(a) Show your latest post to any professor worth their salt (if any exist at NUS). Let them point out your fallacy.

(b) Answer this: What is the complexity of computing the value of sqrt(n), n being any real number?

It is patently clear that the NUS School of Computing has regretfully given you a poor education, even worse, an inflated sense of self-delusion.

Look at your posts: lengthy, imprecise, fickle, erroneous, immature. You evangelize topics you know little about. What's more, in the name of your school.

Certainly, the NUS School of Computing has lost much credibility due to your public ignorance.

"It's easy. (: Solve it. "

One more thing. Solving a problem by coding a solution generally reveals nothing about its algorithmic complexity.

It is a well-known fact. Again, it reveals your lack of knowledge.

If you don't know how to properly demonstrate complexity proofs, please go back to freshmen year.

Well Simon, like you, Edwin also challenged Chris' opinions but chose to do so with valid arguments instead of resorting to verbal offense. I am sure if you do the same, there will be more useful discussions arising from your comments that readers can appreciate.

Juliana: This is TEXTBOOK knowledge. Algorithmic complexity and problem reduction have been studied for hundreds of years. There is nothing to debate about in these trivial problems.

If a fallacy is recalcitrantly wrong, it is because of poor understanding, poor teaching, poor quality of the student, or plain old stubbornness.

Offensive as it may be, it has to be stated.

I'm not even asking you to solve it by coding it. Just pseudocode and analysing it is sufficient for me. I can describe an O(n) solution without coding a single bit.

(a) Now if you're so smart, why don't you point out my fallacy? I mean, what's wrong with pointing it out?

(b) It depends. An approximation based on the floating-point representation (without applying Newton's method) can be done in O(1). Then again, obviously you are expecting something that works on every real number. And I can very humbly tell you I don't know.

"There is nothing to debate about in these trivial problems." Let me know what this trivial problem is. (: If it is trivial, I'm sure we can fix it easily without resorting to your acidic tone. I'm not deluded enough to think that I'm the smartest person on earth.

Juliana, I'm pretty sure you're amused by his tone too. Someone that smart wouldn't need to resort to such a tone, would he? His posts have gone from bad to worse in tone. If all smart people act that way, I wouldn't want to be one. (:

P.S. Now if everyone in his school acts his way, I'm glad I'm in NUS and not his school. (If he happens to be in NUS, it just worsens my already not-too-good perception of NUS. Let's hope not.) And here I thought the better educated you are, the more eloquent you would be...

Please. I am not affiliated with NUS nor the Computing School, but I have been invited by your country to several advisory committees (which I have selectively declined). It also implies a lack of direction and leadership.

With your small capacity, you need to take a bet, a huge gamble on the coming revolution. I'm not a betting person.

Singaporeans are generally disinterested, contented followers (with exceptions). You can, for instance, build various outsourcing industries without the credit.

Disclaimer: I'm not Singaporean.

I share your sentiment (fortunately or unfortunately). I had a discussion about this recently, initiated by a Singaporean I respect, and we ended up with roughly similar conclusions (no, worse than what you said). I also admit that it is a generalization and that there are some exceptionally worthy people here. That's why I hesitate to stay here for a job.

I've no idea how our discussion turned into this though...

Post a Comment