Collective intelligence draws on collaboration between individuals. It exploits their singularities, amplifies them, and creates, in the end, a truly holistic system, far more valuable than the sum of its parts. Collective intelligence provides value for all; it is not a framework for comparing individuals, or even worse, for discriminating against them. Collective intelligence gives strength to differences, distilling from each individual the contributions that then nourish the collective. (For an introduction to collective intelligence, see Collective Intelligence: Mankind’s Emerging World in Cyberspace, by Pierre Lévy, translated by Robert Bononno.)
Collective Intelligence and the Human Network
Collective intelligence draws its power from the network. It is a network that is rich in singularities, each different from the others, and one in which mutual connections allow for comparison and exchange, stimulating continuous adaptation, just as an organism continually adapts to its environment.
We can see a network of this type as the projection of a neural network in which neurons are replaced by people and synapses by interpersonal relationships. In the network that powers collective intelligence, not only does the strength of connections change over time, but so does the topology of the network, with connections that break and are rebuilt elsewhere, and nodes that are switched off and replaced by others. A network of this type is extremely fluid and highly adaptable to its context, with connections that are not purely numerical but also emotional, because what the actors of the network exchange are complex artifacts.
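As a rough illustration of this analogy (not part of the original argument), such a network can be modeled as a weighted graph whose edge strengths and topology both change over time; the class and names below are hypothetical:

```python
class HumanNetwork:
    """Toy model of the network described above: people as nodes,
    relationships as weighted edges that strengthen, decay, or break."""

    def __init__(self):
        self.weights = {}  # frozenset({a, b}) -> relationship strength

    def exchange(self, a, b):
        # Each exchange reinforces the relationship (capped at 1.0),
        # like a synapse strengthened by repeated activation.
        key = frozenset((a, b))
        self.weights[key] = min(1.0, self.weights.get(key, 0.0) + 0.1)

    def decay(self, rate=0.05):
        # Unused connections weaken; those that reach zero disappear,
        # changing the topology of the network itself.
        for key in list(self.weights):
            self.weights[key] -= rate
            if self.weights[key] <= 0:
                del self.weights[key]

net = HumanNetwork()
net.exchange("ada", "grace")  # collaboration strengthens the connection
net.decay()                   # neglect weakens it, and may remove it
```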
The Requirements for Collective Intelligence
To produce collective intelligence, we need, first of all, the distributed strength of the network's elements: each singularity and each mutual connection must be enhanced. On the other hand, we also need an underlying level of universal meaning. This is often taken for granted, but it is the basic element that supports collective intelligence, because it makes it possible to understand what travels on the network.
Thinking in terms of a universal objectification of meaning may seem utopian for collective intelligence at large, but it is not, especially when considering the modern corporation, in which we need to balance the individual with the communicative substrate, so as to have reasonable certainty that what is communicated is actually understood.
To implement collective intelligence, we actually have a triple need (sketched in code after this list):
- To arrive at a conceptualization that can be considered reasonably common to all those who are part of the network
- To make sure that such a conceptualization is open to ongoing evolution
- To enable individuals to derive specialized versions of it
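A minimal sketch of these three requirements, assuming a deliberately simplified terms-as-dictionary representation (the class and method names are invented for illustration):

```python
class SemanticModel:
    """A shared conceptualization: a named, versioned set of term definitions."""

    def __init__(self, name, terms=None, parent=None):
        self.name = name
        self.version = 1
        self.parent = parent
        self._terms = dict(terms or {})

    def define(self, term, meaning):
        # Requirement 2: the conceptualization is open to ongoing evolution.
        self._terms[term] = meaning
        self.version += 1

    def lookup(self, term):
        # A specialized model falls back to the shared model it derives from,
        # so everyone still understands the common core (requirement 1).
        if term in self._terms:
            return self._terms[term]
        if self.parent is not None:
            return self.parent.lookup(term)
        raise KeyError(f"'{term}' is not defined in {self.name}")

    def specialize(self, name, terms=None):
        # Requirement 3: individuals derive specialized versions of the model.
        return SemanticModel(name, terms, parent=self)
```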
In Search of a Universal Semantic Model
A simple representation will not be sufficient, however. For collective intelligence, any representation of universal meaning must also prove quick to adapt and easy to investigate. Since the world is changing quickly, collective intelligence needs to adapt to it with the same speed. Whatever model is chosen to represent the universal semantic base, it must be able to sustain that speed, not be an obstacle to it.
An organization, guided by its business objectives (which act as guidelines for the possible conceptualizations of the domain in which the company operates), needs to be able to define a reference semantic model that is both shared and extensible. Such a model would enable each individual or department in the company to derive new, more specific models, so that the models can be applied directly to each person's tasks. This would provide a strong foundation for collective intelligence.
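Continuing the hypothetical sketch above, a company-wide reference model and a departmental specialization of it might look like this:

```python
reference = SemanticModel("corporate", {
    "customer": "a party with at least one active contract",
    "revenue": "recognized income, net of returns",
})

# The finance department derives its own model: it refines "revenue"
# for its tasks without breaking the shared definition of "customer".
finance = reference.specialize("finance", {
    "revenue": "recognized income, net of returns, accrual basis",
})

print(finance.lookup("customer"))  # resolved from the shared reference model
print(finance.lookup("revenue"))   # resolved from the specialized model
```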
Data Virtualization and the Universal Semantic Model
Data virtualization facilitates the creation of universal data models, a capability that often takes second place to the more technical capability of connecting heterogeneous data sources. But without the former, we would have incomplete operations that could never express their full potential, exactly like knowledge without the awareness of it: knowing and knowing-that-we-know are two syntactically similar expressions that are semantically very different. (As Nicolaus Copernicus said, “To know that we know what we know, and to know that we do not know what we do not know, that is true knowledge.”)
The abstraction of data performed by data virtualization acknowledges two distinct moments. The first, a purely “logical” phase, comes even before implementing any cognitive processes: one must understand the world on which those processes will be directed (a semantic model, from this point of view, is a conceptualization of the world to which it refers). The second is the triggering of those processes. In this second moment, the cognitive processes give the conceptualization dynamism and, by virtue of their conclusions, enrich the conceptualization itself, connecting the logical components with the factual ones and creating an intimate link between the intensional components defined in the conceptualization and their corresponding extensional realizations. (Every cognitive process is a continuous rebound between a logical component, which operates on the conceptualization, and a physical one, which connects it to its manifestations in the observed world. This rebound is what makes learning possible, as it rests on the continuous comparison between what we know and what we actually observe, which pushes us to reconsider our knowledge.)
This connection between the two components, the intensional and the extensional, is fundamental for a real implementation of a collective model of intelligence, given that a conceptualization alone, intended as a mere descriptive exercise (an approach adopted by some families of solutions that propose to formalize “meaning”), does not give concreteness to the whole; it leaves to others the task of creating the connections, of linking concepts to their realizations. Besides constituting an undue burden, such a solution exposes organizations to inconsistencies and ambiguities: if that task is left to individuals, organizations must accept that individual subjectivity can influence the result.
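As a hedged illustration of this intensional/extensional link (the layer, concepts, and sources below are invented for the sketch, not taken from any real product), the conceptualization is defined once, and the connections to its concrete realizations are maintained centrally rather than left to individuals:

```python
class Concept:
    """Intensional side: a logical definition, independent of any data."""
    def __init__(self, name, definition):
        self.name = name
        self.definition = definition

class VirtualizationLayer:
    """Binds each concept to the sources that realize it extensionally,
    keeping the concept-to-instance link in one consistent place."""
    def __init__(self):
        self.bindings = {}  # concept name -> list of fetch functions

    def bind(self, concept, fetch):
        self.bindings.setdefault(concept.name, []).append(fetch)

    def realize(self, concept_name):
        # Extensional side: the concrete instances behind the concept.
        rows = []
        for fetch in self.bindings.get(concept_name, []):
            rows.extend(fetch())
        return rows

customer = Concept("customer", "a party with at least one active contract")
layer = VirtualizationLayer()
layer.bind(customer, lambda: [{"id": 1, "name": "Acme"}])    # e.g., a CRM
layer.bind(customer, lambda: [{"id": 2, "name": "Globex"}])  # e.g., billing
print(layer.realize("customer"))  # the concept, connected to its instances
```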
The way in which data virtualization operates, in particular by not physically moving data as long as there is no need to, also satisfies the requirements of speed and agility: requirements that are necessary to avoid slowing down a profoundly changing model and to enable continuous adaptation to the reality onto which the model projects itself.
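A minimal sketch of that "move data only when needed" behavior, again with invented names: the query is composed logically, and no data moves until execution is explicitly requested.

```python
class LazyView:
    """A logical view that records operations but defers all data movement."""
    def __init__(self, fetch):
        self._fetch = fetch   # called only at execution time
        self._filters = []

    def where(self, predicate):
        # Composing the plan is cheap: no source is contacted here.
        self._filters.append(predicate)
        return self

    def execute(self):
        # Only now does data actually move, and only what was asked for.
        rows = self._fetch()
        for predicate in self._filters:
            rows = [r for r in rows if predicate(r)]
        return rows

view = LazyView(lambda: [{"id": 1, "active": True}, {"id": 2, "active": False}])
active = view.where(lambda r: r["active"])  # still no data movement
print(active.execute())                     # data moves only here
```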
The Foundation for Collective Intelligence
Collective intelligence is a disruptive phenomenon, made possible by breaking down barriers to communication. It is a phenomenon that transmits value on any scale, in any company, large or small. But communication always has two dimensions: the fabric that connects all those who are part of it, and what is transmitted over that fabric. If the fabric is technological, then the content that travels on it is in a conceptual dimension that cannot be separated from a common, shared semantic model. Such a model forms the basis on which we can graft and trigger higher cognitive processes that are the ultimate manifestation of collective intelligence.
I believe that data virtualization should be evaluated and understood from this point of view, as the best possible way to give life to a real data-sharing world, one that is not exclusively technical but is, above all, conceptual and data-driven. Data is never an end in itself; it is always at the service of others, consumed to produce information, knowledge, and wisdom (the three remaining levels of the DIKW [Data, Information, Knowledge, Wisdom] pyramid). Ultimately, data is shared to ensure that everything that derives from it becomes a common heritage.