What is deep in learning?

Depth of learning is generally mapped to being well-read. After all, being well-read creates depth in thinking, sparks new ideas, and helps you correlate things [what is generally referred to as intelligence]. On the other side lie machines, to whom we are trying to teach all that we know; for them we create depth, and we have coined it deep learning.

Data in the world is abundant (relatively), and relations surely exist among its entities. Some of those relations we can map at first glance and point out clearly, as in 'as milk is to cat, so carrot is to rabbit'; these are simple functions, linear so to say. But there are functions too complex to learn and visualize in one step, and for those we need depth. Layer after layer, memory page after memory page, each one adding to the complexity of its prior and marching toward learning the complex with sharpness (a small sketch of this follows below).
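To make the contrast concrete, here is a minimal sketch in Python with NumPy. The weights are picked by hand purely for illustration (not learned), and the example is the classic one: XOR, a function no single linear layer can express, becomes representable once a second layer and a nonlinearity are stacked on top of the first.

```python
import numpy as np

def relu(z):
    # The nonlinearity: without it, stacked layers collapse back into one linear map
    return np.maximum(0, z)

# Hand-picked (illustrative) weights for a two-layer network computing XOR
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])   # hidden-layer weights
b1 = np.array([0.0, -1.0])    # hidden-layer biases
w2 = np.array([1.0, -2.0])    # output-layer weights

def xor_net(x):
    h = relu(x @ W1 + b1)     # first layer: project the input and bend the space
    return h @ w2             # second layer: combine the pieces

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_net(np.array(x, dtype=float)))
# (0, 0) -> 0.0, (0, 1) -> 1.0, (1, 0) -> 1.0, (1, 1) -> 0.0
```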

What is deep in deep learning, who is learning, and what is it that they are learning?


Deep learning models (which are mathematical representations) are built from neurons, just as our own human system is, only these neurons are artificial in existence; after all, they have no hemoglobin supplying them oxygenated blood to function properly. The entire model has sharp, planned connections among these neurons, which are layered in their functioning and appearance, hence deep. To give an arguably acceptable response, these models need data in abundance (relatively); they carefully untangle the cluttered thread and learn its structure in its entirety, so to speak, the model learns.
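As a rough sketch of what "layered connections among artificial neurons" looks like in code (assuming plain NumPy, randomly initialized weights, and layer sizes chosen arbitrarily for illustration), a forward pass is simply one layer feeding the next:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary layer widths: 8 input features flowing through three hidden layers to 1 output
layer_sizes = [8, 16, 16, 8, 1]

# Each "layer" is a weight matrix plus a bias vector connecting one set of neurons to the next
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input through the stack; each layer transforms the output of its prior."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0, h @ W + b)      # artificial neuron: weighted sum, then nonlinearity
    return h @ weights[-1] + biases[-1]   # final layer left linear

x = rng.normal(size=8)
print(forward(x))  # a single (untrained) prediction; learning would adjust the weights from data
```

The sketch only shows the forward direction; the "learning" part of deep learning is the separate step of adjusting those weight matrices against data.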

How close does this layer stacking bring these math models to humans?

As Gladwell puts forward in his book Outliers, there is the idea of a "threshold" for intelligence. Have these models already surpassed that threshold? If so, do they need practical intelligence? If not, how deep are they going?
