The fake coin problem goes like this: you have a heap of coins, but one of them is fake and you know it has a different weight to the real coins. Luckily you have a balance with you, and the aim of the problem is to find out which coin is fake using as few weighings as possible.
Let's start with a simplified version of the problem: you know for a fact that the fake coin is heavier. When I first considered the problem I thought up the following algorithm (there's a quick code sketch just after the list):
0) start off with all of the coins in an 'unknown' pile
1) get any two coins and compare their weights
2) if one is heavier, that's the fake coin, and if not, put both coins in the 'real' pile.
3) repeat step 1
4) if there are no coins left in the unknown pile and you haven't found the fake coin, get a better quality balance.
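To make the steps concrete, here's a minimal Python sketch of that pairwise approach (my own translation of the list above, assuming the coins come in as a list of numeric weights and that the fake coin really is heavier):

def find_fake_pairwise(coins):
    # Compare coins two at a time; the heavier coin of an unequal pair is the fake.
    # Assumes exactly one coin is fake and that it is heavier than the rest.
    for i in range(0, len(coins) - 1, 2):
        if coins[i] > coins[i + 1]:
            return i          # index of the fake coin
        if coins[i] < coins[i + 1]:
            return i + 1
    # every pair balanced, so the unpaired last coin (when n is odd) must be the fake
    return len(coins) - 1

print(find_fake_pairwise([5, 5, 5, 6, 5]))   # prints 3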
So if you have n coins it's going to take around n/2 comparisons in the worst case, meaning my algorithm has a time complexity of O(n). Not too bad, but the ability to compare the weights of several coins at once using the balance makes me think there's a better way of cutting down the problem.
It's often a good idea to consider smaller versions of the same problem in order to make an algorithm that handles all possible sizes. The ideal size to work with is 3 coins: 2 real, 1 fake. You would compare two coins: if one is heavier then you know that's the fake coin, but if they balance then you know the coin that isn't on the balance (the coin in the unknown pile) must be the fake coin. This can be extrapolated to larger numbers of coins. Let's say I have 9 coins; I can split them into 3 groups of 3 coins and compare two of the groups. Just like before, if the two groups I compare are equal in weight then they contain only real coins, meaning the fake coin must be in the other group. Because this group has 3 coins we can repeat the process using the same method until we find the fake coin.
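Here's that 3-coin case as a quick Python sketch. The weigh function is a hypothetical stand-in for the balance (not part of the problem statement) that takes two lists of coins and returns 'left', 'right' or 'equal':

def find_fake_of_three(coins, weigh):
    result = weigh([coins[0]], [coins[1]])   # put one coin on each side of the balance
    if result == 'left':
        return coins[0]
    if result == 'right':
        return coins[1]
    return coins[2]   # the two weighed coins balance, so the leftover coin is the fake

The same three-way decision works when each pan holds a whole group of coins instead of a single coin, which is exactly what we'll exploit below.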
Let's try to get this algorithm into some pseudocode. The input is a list of n coins with each value corresponding to that coin's weight. We'll say that our algorithm is only allowed to compare two values or groups of values in the list at a time, which can be done via my 'FindHeavier' function. 'FindHeavier' simply compares the weights of the two sections of the list and returns the heavier one, and if they are of equal weight it returns -1, but I'll omit its body from the code because it's not really important for what we're doing (dat abstraction).

Now, considering our 'groups' of coins will merely be sections of the list, we need some notation for specifying a certain range of elements from within our coin list. The way Python does it (and I'm assuming some other languages do something similar) is along the lines of L[a..b], where 'a' is the index of the first element in the range and b-1 is the index of the last. In other words, L[a..b] means "make a new list with the elements with index 'a' through to 'b', including 'a' but not including 'b'". As an example, if L = [5,3,7,4,1] then L[0..3] = [5,3,7], i.e. the elements at indexes 0, 1 and 2. Why not include both 'a' and 'b'? Let's look at some benefits: if you have a list L of length n you can choose some index 'b' in the middle to split it with and then assign L[0..b] to 'firstHalf' and L[b..n] to 'secondHalf', where the actual element with index 'b' will only show up in secondHalf (meaning there's no overlap), so in that sense it looks pretty clean. Also, if you want to know how many elements will show up in your new list L[a..b], the answer is just b-a. An extra thing to note is that L[i..i+1] has exactly 1 element, and, following the b-a rule, L[i..i] is an empty list.
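For reference, in Python the same idea is written with a colon instead of the two dots, and it behaves exactly as described:

L = [5, 3, 7, 4, 1]
print(L[0:3])       # [5, 3, 7] -- indexes 0, 1 and 2; index 3 is not included
print(L[1:2])       # [3]       -- L[i:i+1] always has exactly one element
print(L[2:2])       # []        -- L[i:i] is an empty list
print(len(L[1:4]))  # 3         -- the length of L[a:b] is b - a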
We'll be specifying the three group ranges in the code, i.e. L[0..a], L[a..b] and L[b..n], and we need to find a way to express a and b in terms of n. We want all three groups to have the same number of coins, but if that's not possible (i.e. n % 3 != 0) we can leave the extra coins with the third group so that the two groups being compared have the same number of coins (meaning the weight comparison will actually be helpful in finding the fake coin). Let's look at a list with 9 elements (the contents of each element are unimportant for now):
Starting with 'a', we have n = 9 elements and end up with a=3. This makes me think a=n/3. In turn, b=2n/3. Sounds reasonable, let's look at a list with 10 elements.
This time n = 10 so we can't just say a = n/3 because n/3 = 3.333. But we can floor it to get 3, so let's say a = floor(n/3). Likewise, b = floor(2*n/3). How about a list of only 2 elements? In this case we want the first group to consist of the first element and the second group to consist of the second element, because one of them will definitely be the fake coin, so we don't need to worry about a third group. As for the third group, we don't really care how many elements end up in it through our notation because our algorithm will return the solution without ever needing to consider that group.
With our previous definition of a, a = floor(n/3). In this instance we want a = 1 with n = 2, but floor(2/3) = 0. Hmm. What about a = floor((n-1)/3)+1? When n = 2, a = floor((2-1)/3)+1 = floor(1/3)+1 = 0+1 = 1. This works, but does it work for the above examples? Initially we had n = 9 and a = 3. floor((9-1)/3)+1 = 2+1 = 3. So far so good. What about n = 10, where we picked a = 3? floor((10-1)/3)+1 = 3+1 = 4. Nope. And if we just stick with a = floor(n/3) and b = floor(2*n/3), then for n = 2 we get a = 0 and b = 1, and as L[0..0] is an empty list, the first group is empty and we'd end up comparing nothing against a single coin, which is not what we want (our code expects the comparison of the first two groups to be a fair one). Here's what using floor(n/3) and floor(2n/3) actually gives us:
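A quick way to see it in Python, using placeholder coins numbered 0 to n-1 (the actual weights don't matter for the split):

from math import floor

for n in [9, 10, 2]:
    a, b = floor(n / 3), floor(2 * n / 3)
    L = list(range(n))                   # placeholder 'coins' 0..n-1
    print(n, L[0:a], L[a:b], L[b:n])

# 9  -> [0, 1, 2]  [3, 4, 5]  [6, 7, 8]
# 10 -> [0, 1, 2]  [3, 4, 5]  [6, 7, 8, 9]
# 2  -> []         [0]        [1]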
But then why do we get this outcome? The reason is that flooring rounds the split points down, so the leftover elements that get ignored in the flooring process all get pushed onto the later groups, which end up with an equal share or 1 or 2 extras. This means when we have 2 elements the first group ends up with 0 coins while the leftovers pile up behind it. So the way to solve this problem is to flip things around: give the two groups we actually weigh their full share first, and let the third group be the one that ends up with an equal number of elements or fewer. We can use the ceiling function to help us with this. Let's say a = ceil(n/3) and b = 2*ceil(n/3), i.e. b = 2a, so that the first two groups always have exactly ceil(n/3) coins each. (Writing b = ceil(2*n/3) is almost the same but not quite: for n = 10 it gives 7, which would leave the second group one coin short of the first.) Now when n = 2, a = 1 and b = 2. When n = 9, a = 3 and b = 6. When n = 10, a = 4 and b = 8.
The reason that switching from floor to ceiling changes the amount the third group gets is that the floor function acts as though there are fewer elements than there really are, so the extras have to be soaked up by the later groups. The ceiling function does the opposite: it acts as though there are slightly more elements than there really are, the first two groups each take their full ceil(n/3) coins, and by the time you get to the third group it pays the price by having the same number of elements as the first two or fewer. Let's go back to our examples and see what they look like now:
Our n = 9 is exactly the same:
Our n = 10 is not:
This may look a bit imbalanced, but the difference in size between the first two groups and the third will never exceed 2 elements (because as soon as there would be three extra elements, they get shared out as one more coin for each group).
.... And our n = 2 is looking the way we originally wanted it:
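The same quick check in Python, now using the ceiling-based split, reproduces these three layouts:

from math import ceil

for n in [9, 10, 2]:
    a = ceil(n / 3)
    b = 2 * a                            # both weighed groups get exactly ceil(n/3) coins
    L = list(range(n))
    print(n, L[0:a], L[a:b], L[b:n])

# 9  -> [0, 1, 2]     [3, 4, 5]     [6, 7, 8]
# 10 -> [0, 1, 2, 3]  [4, 5, 6, 7]  [8, 9]
# 2  -> [0]           [1]           []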
So there we go, we've got this list-slicing down pat. Finally, here is the pseudocode.
FindFake(L):
{
    n <-- length of L
    if (n == 1) return L[0]                              // only one suspect left, so it must be the fake
    a <-- ceil(n/3)
    heavierGroup <-- FindHeavier(L[0..a], L[a..2*a])     // weigh the first two groups against each other
    if (heavierGroup != -1) return FindFake(heavierGroup)
    else return FindFake(L[2*a..n])                      // they balance, so the fake is in the third group
}
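And for completeness, here's a runnable Python version of the same idea (my own translation of the pseudocode): each coin is just its numeric weight, sum() stands in for the balance, and None plays the role of the pseudocode's -1.

from math import ceil

def find_heavier(group1, group2):
    # Stand-in for the balance: return the heavier group, or None if they balance.
    if sum(group1) > sum(group2):
        return group1
    if sum(group2) > sum(group1):
        return group2
    return None

def find_fake(coins):
    # Return the fake (heavier) coin, assuming exactly one coin is heavier than the rest.
    n = len(coins)
    if n == 1:
        return coins[0]                          # only one suspect left
    a = ceil(n / 3)
    heavier = find_heavier(coins[0:a], coins[a:2 * a])
    if heavier is not None:
        return find_fake(heavier)                # the fake is in the heavier weighed group
    return find_fake(coins[2 * a:n])             # both weighed groups balance: fake is in the leftovers

print(find_fake([5, 5, 5, 5, 5, 5, 6, 5, 5]))    # prints 6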
So here we have a neat recursive function which, with each recursive call, cuts the problem down by a factor of 3 (or near enough, when n isn't divisible by 3). This means we have a complexity of O(log3(n)), which, as I explained in the complexity post, is just O(log n). Very nice.
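If you want to convince yourself of the log base 3 growth, you can count the weighings directly: each call makes one weighing, and the biggest group it can hand off to the next call has ceil(n/3) coins. A small sketch of that count:

from math import ceil

def worst_case_weighings(n):
    # one weighing per call; the largest group we can recurse into has ceil(n/3) coins
    return 0 if n <= 1 else 1 + worst_case_weighings(ceil(n / 3))

for n in [3, 9, 10, 100, 1000]:
    print(n, worst_case_weighings(n))   # 1, 2, 3, 5 and 7 weighings respectively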
This post has really been more of an exploration of slicing lists into equally sized groups than it has been about the fake coin problem itself, but hopefully you've learnt something. I'm thinking next up I'll do a tiny post on invariants and then some bigger posts on trees, like heaps, binary search trees and spanning tree algorithms.