Science 4.0 is a movement that has grown out of the formation and use of the World Wide Web: scientists are using the Web to collaborate and to do science in new ways, with significant implications. Science 4.0 describes a special case of use of the World Wide Web and is influenced by Web 2.0, so it is helpful to understand Web 2.0 in order to better understand Science 4.0. The term Web 2.0 was first used at an O’Reilly Media conference and implies a development in the use of the Web; this has been covered in an earlier article in the series (see Appendix). According to this definition, Web 2.0 has several characteristics, one of which is the central importance of datasets. While the original description of this characteristic is relatively short, it successfully predicts later developments. The description can be broadly divided into:
(a) The use of databases for commercial purposes. The examples given include MapQuest, a company which made significant investments in generating databases of maps that could be used in multiple software applications. In this case, ownership of the data was a key factor in the success of the venture.
(b) The transformation of databases. Once a database was generated, the next stage was to transform the data. In the example of Amazon, the data on books was expanded both by Amazon and by its customer base. For instance, customer reviews were included in the database and were available when searching Amazon for a particular book.
(c) Proprietary databases would provoke a reaction, resulting in an open data movement. This feature of the definition was particularly prophetic and is relevant to Science 4.0, where we are seeing exciting developments with the potential to transform the way science is done.
The process of science has been outlined by philosophers such as Thomas Kuhn who introduced the term ‘normal science’. Kuhn focuses on the contrast between ‘revolutionary science’ and ‘normal science’. He assumes that the reader is familiar with the process of normal science as described in standard school textbooks. In normal science the scientist will generate a hypothesis within the current paradigm. The scientist will devise an experiment to test the hypothesis and in quantitative studies will identify outcome measures. The results are analysed in the context of the null hypothesis. The hypothesis and the experiment are the creation of the scientist.
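As a minimal sketch of this quantitative workflow, the following tests a null hypothesis of equal group means with an independent-samples t-test. The data are hypothetical outcome measures with illustrative values only, and it assumes SciPy is available:

```python
from scipy import stats

# Hypothetical outcome measures for two groups (illustrative values,
# not real data): e.g. scores on some quantitative measure
control = [48, 52, 50, 47, 53, 49, 51, 50]
treatment = [58, 61, 57, 60, 59, 62, 58, 60]

# Test the null hypothesis that the two groups have equal means
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis of equal means")
```

Here the scientist has created both the hypothesis and the experiment; the data only exist because the experiment was devised to produce them.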
With Science 2.0, there is another way of doing things, which involves the generation of open datasets. The Alzheimer’s Disease Neuroimaging Initiative is one example (see Appendix). In this case, the scientist is able to access a dataset that has already been generated. Compared to the above, this is a very different way of doing things. The scientist accessing the dataset will be dependent on the creators for the quality of the data, which in turn depends on their methodology. With these caveats, the scientist is still able to generate hypotheses and test them on the dataset. However, hypothesis generation is constrained by the available data: in essence, the scientist chooses from a more limited set of questions. Nevertheless this has many advantages. The overheads are considerably reduced, and the turnaround time for publications is also reduced.
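By way of illustration, the sketch below stands in for this second workflow: a few rows of a hypothetical open dataset are loaded, and a hypothesis, constrained to the variables the creators chose to release, is tested. The column names and values are invented for illustration (they are not taken from ADNI or any real dataset), and it assumes pandas and SciPy are available:

```python
import io

import pandas as pd
from scipy import stats

# Stand-in for rows downloaded from an open dataset; in practice this
# would be a file obtained from the dataset's distribution point.
# Column names and values are hypothetical.
csv_text = """participant_id,age,hippocampal_volume
p01,60,4.0
p02,65,3.8
p03,70,3.7
p04,75,3.4
p05,80,3.1
p06,85,3.0
"""
df = pd.read_csv(io.StringIO(csv_text))

# The hypothesis is constrained to the columns the creators released:
# here, that volume declines with age
r, p = stats.pearsonr(df["age"], df["hippocampal_volume"])
print(f"r = {r:.2f}, p = {p:.4f}")
```

The constraint is visible in the code: the scientist can only ask questions answerable from the columns present in the file, but pays none of the cost of data collection.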
However, the open science movement is still at a formative stage. Open datasets tend to be restricted according to institutional affiliation, constraining their application. Things are changing, though: the UK government has made 40,000 datasets available, while Synapse has over 7,000 open datasets available at the time of writing. When the full range of resources for harnessing collective intelligence is applied to open datasets, supported by the appropriate infrastructure, we will see a rapid maturation of Science 4.0. This will lead to a re-evaluation and transformation of the very nature of science itself, which will branch into the intuitive and the unintelligible. Unintelligible science will result from the application of augmented- and artificial-intelligence solutions, which will transform the databases and build up more accurate models of reality. A new development will be the study of how these new and successful models have emerged, using reverse-engineering principles to produce explanations that the human mind is capable of understanding.