If we look back at the history of computing, we can see that information technology was initially tightly bound to the capabilities of the hardware, but has become more intuitive and "human" over the past decades:
- In the beginning (see the Zuse Z3 above), programmers had to interact physically with the technology, pushing and pulling registers or feeding punch cards.
- In the 1970s and 1980s, assembler programming became more prevalent; you still interacted relatively closely with the physical hardware, but at least programming was no longer a physical exercise.
- Computer scientists then discovered that there are more intuitive ways to interact with computers and took inspiration from cooking recipes: the functional and procedural programming of the 1980s and 1990s (remember Pascal or C) essentially resembled what cookbooks have done for centuries: describing the ingredients and a sequence of steps to achieve a certain outcome.
- Even more intuitive was the object-oriented programming of the 1990s and 2000s, where large amounts of source code were no longer expressed as lengthy spaghetti code but organized intuitively into objects and methods. This programming paradigm can already be seen as inspired by how our brain perceives the real world: we learn concepts of abstract objects (abstract entities such as cars, trees or buildings) and see their realization or instantiation in reality. Objects also have certain characteristics (size, color, shape) and functions, which correspond to the data and methods associated with them.
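The contrast between the recipe-like procedural style and the object-oriented style described above can be sketched in a few lines. This is a toy illustration only; the `Car` example and its attributes are chosen here for illustration and do not come from the text.

```python
# Procedural style: a recipe-like sequence of steps operating on plain data.
def paint(car, color):
    car["color"] = color
    return car

# Object-oriented style: characteristics (data) and functions (methods)
# are bundled into a class that abstracts the real-world concept "car".
class Car:
    def __init__(self, size, color, shape):
        self.size = size    # characteristics correspond to data...
        self.color = color
        self.shape = shape

    def paint(self, color):  # ...and functions correspond to methods
        self.color = color

procedural_car = paint({"size": "small", "color": "red", "shape": "coupe"}, "blue")
oo_car = Car("small", "red", "coupe")
oo_car.paint("blue")
print(procedural_car["color"], oo_car.color)  # blue blue
```

Both snippets do the same work; the object-oriented version simply mirrors how we already think about real-world entities and their properties.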
This, however, is not the end of the development. The problem with object-oriented programming is that functions and methods dominate, while the data is often deeply hidden in the code or in data silos where its structure is known only to a few experts. We currently see increasing attention to data (e.g. big data, smart data, data science): data is becoming more and more a first-class citizen of computing. Still, many challenges lie ahead of us before the vision of cognitive data is realized. We need to find and use more intuitive representations of data that capture its structure and semantics in ways comprehensible to both machines and humans, so that we develop a common understanding of the data across use cases, organizations, applications, value chains and domains. Knowledge graphs, linked data and semantic technologies are good candidates in this regard.
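The core idea behind knowledge graphs can be sketched with subject-predicate-object triples, the data model used by RDF and linked data. The facts and the pattern-matching `query` helper below are invented for illustration, not taken from any real vocabulary or library.

```python
# A knowledge graph as a list of subject-predicate-object triples.
# The structure and semantics of the data are explicit in the data itself,
# so both humans and machines can inspect and query it.
triples = [
    ("Zuse_Z3", "type", "Computer"),
    ("Zuse_Z3", "createdBy", "Konrad_Zuse"),
    ("Konrad_Zuse", "type", "Person"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given partial pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Who or what is of type Computer?
print(query(triples, predicate="type", obj="Computer"))
# [('Zuse_Z3', 'type', 'Computer')]
```

Real semantic technologies (RDF stores, SPARQL) generalize exactly this pattern-matching idea, with shared vocabularies providing the common understanding across organizations and domains.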
Text by: Prof. Dr. Sören Auer
Sören Auer's blog on tib.eu: Link
Sören Auer's articles on LinkedIn: Link