Ok, so I was going through the Longest Common Subsequence problem in CLRS, and I understand it for the most part. What I don't understand is why it is so needlessly complicated.
The algorithm uses a normal nested for-loop to go through both sequences, and every time it finds a matching element, it does a bunch of crap with tables and arrows. Why not just add this matching element to a vector, thus ending up with a vector containing the subsequence in order?
What's the point of making two extra tables and wasting so much space?
I'll have to look at the code later, but my question still stands: why does the algorithm presented in CLRS waste so much space and involve so much complication when the solution I presented in the OP works just as well?
I thought my explanation was clear enough, but okay, here is the pseudocode:
int LCS_LENGTH(X, Y)
{
    int m = X.length;
    int n = Y.length;
    for (i = 1 to m)
        for (j = 1 to n)
            if (X[i] == Y[j])
                push X[i] into a vector
    return vector.size();
}
In the book, once they are inside the inner for-loop, they start adding stuff to the two auxiliary tables (i.e. matrices) they've made in the function. What I'm saying is skip all that, and simply add the matching element to the vector.
My pseudocode is exactly what your real code is. That's exactly my point. It's simple and straightforward code, but the book uses all this extra space by using not one, but TWO 2-dimensional arrays to store lengths and arrows.
That means you haven't tried running the program. Do that and come back here... (hint: my program which, as you say, is based on your pseudocode doesn't work)
Oh snap! Didn't even realize that. Okay, I was wrong, I should have coded and ran my algorithm first.
@JLBorges: It seems there are 2 algorithms for this problem. You presented the "naive" recursive approach that I was unfamiliar with. The one with tables and arrows uses dynamic programming, and I get now why that is necessary.
Thank you both and apologies for not running my program before posting here.