This is the final part in a series looking at Science 4.0 (see the Appendix for the other articles in the series). I have used the term ‘Science 4.0’ to describe the effects that the internet, and more specifically the World Wide Web, can have on the culture of science. Science 4.0 is also defined in terms of the nuances of licensing the products of this approach. In looking at the potential effects of the World Wide Web on science I have turned to Tim O’Reilly’s definition of Web 2.0, which was presented at the O’Reilly Media Conference and has been very influential in this area. The Web 2.0 definition focuses on how the World Wide Web has evolved as people have adapted to the web and as technologies have developed. O’Reilly lists seven features of Web 2.0. The seventh is that Web 2.0 brings rich user experiences, and in this post I will look at how this might apply to science.
In the Web 2.0 definition the subject of rich user experiences is discussed in relation to a number of technical factors, which I have interpreted as follows. The discussion focuses on the end user, who has a ‘client’ computer. The client computer can have complete software installed, or it can rely on the server to perform the processing, but there is also a third option: software installed on the client that can interpret pieces of code sent from the server. There is no need to download this code onto the client computer’s hard drive; instead it runs in the client computer’s temporary memory only when it is needed. The examples given are Flash, Java and AJAX, the last being a set of technologies used by Google in, for instance, Gmail and Google Maps. However, the rich experience derives not just from these technologies but from the ability to share a variety of data formats between the client computer, the server and other computers on the network.
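As a rough illustration of this client-side model, the AJAX pattern can be simulated as a client that requests only the fragment of data it needs and merges it into state held in memory, rather than reloading a whole page. The names below (`server_respond`, `Client`) are invented for this sketch and do not correspond to any real browser or server API:

```python
def server_respond(query):
    """Stand-in for a server endpoint: returns only the requested fragment."""
    fragments = {"inbox_count": 3, "map_tile": "tile_42"}
    return {query: fragments[query]}

class Client:
    """Stand-in for a browser page that updates itself in place, AJAX-style."""
    def __init__(self):
        # The page state lives in the client's memory between requests.
        self.state = {"page": "mail", "inbox_count": 0}
        self.full_reloads = 0

    def ajax_update(self, query):
        # Only the small fragment travels over the 'network'; the rest of
        # the page state stays untouched in the client's memory.
        self.state.update(server_respond(query))

client = Client()
client.ajax_update("inbox_count")
print(client.state["inbox_count"])  # 3
print(client.full_reloads)          # 0 — no full page reload was needed
```

The point of the pattern is visible in the last two lines: the displayed data changed while the rest of the page state persisted in memory.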
These are the technical considerations, and the result is that the end user has richer experiences: data can be contextualised and transformed in many different ways. The key question for this discussion is how such experiences can be relevant to science. Crudely speaking, the core feature of science is that a body of knowledge about the world is accumulated through the experimental method, although this account has been considerably expanded on by philosophers such as Thomas Kuhn (technically speaking, Kuhn describes his work as an exposition of the historiography of science; see the Appendix for a discussion of his work). Taking this crude definition as a basis for further discussion, we can ask the more specific question: ‘how can an understanding of the World Wide Web resulting from the Web 2.0 definition relate to the advancement of scientific knowledge through the experimental method?’. The answer to this question, which I have assumed to refer to ‘normal science’ in Kuhn’s sense, can be divided into nine categories, including solutions to the resource problems that must be overcome in order to advance scientific knowledge.
1. The storage of scientific data and knowledge. Prior to the World Wide Web, information was stored electronically, e.g. Medline. However, the World Wide Web, as well as the Deep Web, has made available a global network of server repositories of scientific data (see this index as an example), scientific papers (e.g. BMC), scientific journals including those that are Open Access (e.g. PLoS ONE), discussions amongst scientists in social media forums (e.g. Nature Network) and presentations of scientific work (e.g. the MRC YouTube Channel). The transformation of data into knowledge can be facilitated by Web 2.0 applications. For instance, Wolfram Alpha enables a person to upload data, and the Wolfram Alpha engine will undertake analysis of that data. The resulting knowledge can then be disseminated by any of the above means. More data is also becoming available; for instance, the UK government is making public body data available for further analysis.
2. The organisation of scientific data and knowledge. Various reference manager programs enable people to organise scientific papers, and a number of these have social media variants which facilitate collaboration in this organising process. Science blogs may be indexed by category, which means that they become searchable in many different ways.
3. The analysis of data. Again the Wolfram Alpha search engine provides a clear example of how Web 2.0 applications can facilitate data analysis and enable a variety of different types of analysis.
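Wolfram Alpha’s own pipeline is proprietary, so the following is only a generic sketch, using Python’s standard library, of the kind of automated summary analysis such an engine can derive from uploaded data (the dataset and the `summarise` helper are invented for illustration):

```python
import statistics

def summarise(values):
    """Derive a basic statistical summary from raw data — the kind of
    data-to-knowledge transformation a web analysis engine can automate."""
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
    }

# Hypothetical uploaded dataset, e.g. reaction times in milliseconds
data = [310, 295, 330, 305, 320]
report = summarise(data)
print(report["mean"])    # 312
print(report["median"])  # 310
```

The richer experience comes from the fact that the person uploading the data need not specify the analysis in advance; the service chooses and presents a variety of appropriate transformations.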
4. Collaboration between scientists. The Alzheimer’s Disease Neuroimaging Initiative is a Deep Web application in the sense that applicants must apply for access to the data. The raw dataset includes biomarker data for people with Alzheimer’s Disease as well as for other groups. It has been incredibly successful not only in facilitating the generation of hypotheses and the publication of papers advancing knowledge but also in prompting further projects along these lines.
5. Crowdfunding resources for science projects. Science projects require funding, and a novel Web 2.0 solution is crowdfunding, which brings together people to fund projects they consider worthwhile (e.g. Crowdfunding.co.uk). Prior to this approach, funding streams occurred offline with highly structured application processes. Crowdfunding is a very flexible approach which allows the scientist to adopt a range of strategies to secure funding.
6. Publication of scientific papers. As discussed above, many journals are becoming Open Access, providing a Web 2.0 solution to accessing scientific papers and disseminating scientific results. The addition of supplementary material, as well as search facilities, makes accessing a journal a much richer experience (technically speaking, subscription journals/articles would be referred to as part of the Deep Web).
7. Dissemination of scientific research. Here social media comes into its own. Research is disseminated across the full gamut of social media forums including Twitter, FriendFeed, YouTube and science blogs, among others. There are also social media platforms which aggregate this material (e.g. ScienceSeeker). These platforms are evolving quickly and are transforming the experience of people accessing the results of scientific research.
8. Recruitment of scientists. There are a number of Web 2.0 resources for recruiting scientists, ranging from LinkedIn to general science publications as well as the more specialised journals. Again, Web 2.0 solutions bring a fast-moving, flexible approach in this area.
9. Recruitment of subjects. This is perhaps one of the more controversial areas and is subject to regulation depending on the area of study (e.g. the FDA has very specific guidelines about recruitment of research subjects). Web 2.0 approaches have the potential to transform this area within the regulatory framework.
There are most likely many more ways in which Web 2.0 applies to science than have been mentioned here, but what is evident is that the Web 2.0 culture has already been transformative for science.
Appendix 1 – Science 4.0 Articles on the TAWOP Site
Appendix 2 – Discussion of Thomas Kuhn on the TAWOP Site