Cartesian Thinking and the tyranny of algorithms

http://parisinnovationreview.com/2013/03/15/big-data-cartesian-thinking/

“Induction allows us to generalize a phenomenon, even if it is observed only once. This logic, despite being so fundamentally human, is foreign to engineers and scientists trained in Cartesian epistemology. This explains a number of confusions that interfere with the understanding of Big Data. Some see induction as a form of statistics and mistake the search for singularities for a more refined segmentation of statistically obtained elements. Some even speak of intuition when describing induction.

In all of these cases, the confusion stems from a desire to compare different principles in identical fields. In fact, there are areas where deduction is quite efficient and others where induction is required. It would be completely pointless to try to apply induction where deduction is effective and relevant… The opposite is also true. These two tools cannot be compared; in a sense, they aren’t even competitors. True wisdom is knowing when to use the right tool in the right circumstances.

This duality is reflected in the temporal approach to analysis. Deduction, statistics or probability can be fueled once, with several years of data, to establish a “law” (i.e. a repeatable result). Induction, however, is a continuous technique that takes time. It is an ongoing process that generates singularities, expands their base and measures the effectiveness of their application.

Inductive reasoning can’t be reduced to a single model. It depends on previous inductions and detected singularities. It is not repeatable. Again, it is far from Cartesian principles. Induction does not need complete and consistent parameters, since our brain processes them only partially, concentrating on what it considers the critical information about the situation. On the other hand, this method also produces errors.”
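
To make the article's temporal contrast concrete, here is a minimal Python sketch, not taken from the article itself: the data and names are illustrative assumptions. It shows the deductive mode computing a fixed "law" once from a historical batch, while the inductive mode keeps a provisional estimate that is revised with every new observation in an open-ended stream.

```python
# A minimal sketch (illustrative assumptions, not the article's method)
# contrasting the two temporal modes described above.

from statistics import mean

historical_data = [2.1, 1.9, 2.0, 2.2, 1.8]  # e.g. several years of records

# Deductive mode: fit once from the batch, then reuse the fixed "law".
law = mean(historical_data)  # a single, repeatable result

# Inductive mode: an ongoing estimate, revised with each new observation.
count, estimate = 0, 0.0

def observe(x):
    """Fold one new observation into the running estimate."""
    global count, estimate
    count += 1
    estimate += (x - estimate) / count  # incremental mean update

# The stream never has to end; each arrival shifts the estimate slightly.
for x in historical_data + [2.4, 2.3]:
    observe(x)

print(law)       # fixed: computed once, never revised
print(estimate)  # provisional: keeps moving as observations accumulate
```

The design point is the one the article makes: the batch computation terminates and yields a repeatable value, whereas the incremental update is a process with no natural endpoint, whose current output depends on everything observed so far.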