kNN vs. SVM (Neural Search)

Created on 2023-07-23T05:15:07-05:00


This card pertains to a resource available on the internet.

This card can also be read via Gemini.

SVMs seem to do a better job of ranking documents against a search query. But this requires training an SVM over the entire dataset for each query, and it takes more compute to fulfill the search.

I doubt this will be very scalable on its own. But perhaps you could use a kNN implementation to subset the records, then train the SVM on that small remainder to find the best articles within the subset?
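That hybrid might look roughly like the sketch below, assuming NumPy and scikit-learn. The function name `knn_then_svm`, the shortlist size, and the toy data are all invented for illustration; the one-positive SVM setup follows the approach in the linked resource.

```python
import numpy as np
from sklearn.svm import LinearSVC

def knn_then_svm(query, docs, knn_k=100, top=10):
    """Stage 1: cheap cosine kNN to shortlist candidates.
    Stage 2: train an SVM on the shortlist and rerank it."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    shortlist = np.argsort(-d @ q)[:knn_k]  # kNN candidate subset

    # The query embedding is the lone positive example; the shortlisted
    # documents are the negatives. C=0.1 follows the tuning advice below.
    x = np.concatenate([q[None, :], d[shortlist]])
    y = np.zeros(len(x))
    y[0] = 1
    clf = LinearSVC(class_weight="balanced", C=0.1, max_iter=10000)
    clf.fit(x, y)

    # Rank only the shortlist by the classifier's decision score.
    scores = clf.decision_function(d[shortlist])
    return shortlist[np.argsort(-scores)][:top]

# Toy demo: plant a near-duplicate of the query at index 7.
rng = np.random.default_rng(1)
docs = rng.normal(size=(5000, 64))
query = docs[7] + 0.05 * rng.normal(size=64)
result = knn_then_svm(query, docs)
```

The kNN stage keeps the SVM's training set small (here 100 rows instead of 5000), which is what makes the per-query training cost tolerable.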

Value of C: You'll want to tune C. You'll most likely find the best setting to be between 0.01 and 10. Values like 10 very severely penalize the classifier for any mispredictions on your data; it will make sure to fit your data. Values like 0.01 will incur less penalty and will be more regularized. Usually this is what you want. I find that in practice a value like 0.1 works well if you only have a few examples that you don't trust too much. If you have more examples and they are very noise-free, try more like 1.0.
Why does this work? In simple terms, because SVM considers the entire cloud of data as it optimizes for the hyperplane that "pulls apart" your positives from negatives. In comparison, the kNN approach doesn't consider the global manifold structure of your entire dataset and "values" every dimension equally. The SVM basically finds the way that your positive example is unique in the dataset, and then only considers its unique qualities when ranking all the other examples.
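The exemplar-SVM ranking described above can be compared against plain kNN in a short sketch, assuming NumPy and scikit-learn. The random embeddings and the planted near-duplicate at index 42 are illustrative; the `LinearSVC` settings mirror the ones suggested in the resource.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 128))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

# A query that is a noisy copy of document 42.
query = docs[42] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)

# kNN ranking: cosine similarity against every document, every
# dimension weighted equally.
knn_rank = np.argsort(-docs @ query)

# SVM ranking: the query is the single positive, all documents are
# negatives; the learned hyperplane emphasizes the dimensions that
# make the query unique within this dataset.
x = np.concatenate([query[None, :], docs])
y = np.zeros(len(x))
y[0] = 1
clf = LinearSVC(class_weight="balanced", C=0.1, max_iter=10000, tol=1e-6)
clf.fit(x, y)
svm_rank = np.argsort(-clf.decision_function(docs))
```

Both rankings should surface the planted near-duplicate, but the decision scores come from a direction fit against the whole cloud of negatives rather than from raw per-dimension similarity.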