Categories
Science & technology

It’s good to talk – the many methods and levels of communication

PhD student speaks about the differing forms of communication for humans and animals – and her experience of returning to university in her late 40s

If you’d asked me thirty years ago what I’d be doing during my 50th year on the planet, I’m fairly sure my answer wouldn’t have been “studying for a PhD”, but here I am at Middlesex University. I wasn’t able to complete my doctorate some years ago for health and personal reasons, but as the Universe/University has given me another shot at it as an (even more) mature student, I’m doing everything in my power to get it finished this time.

I’m studying how some tiny, insect-like animals of the Order Collembola communicate. Most people know them as springtails, except the ones I’m looking at don’t spring. The mechanism that allows this in most springtail species doesn’t function in my creatures.

One of the most important methods of communication in insects and similar creatures is through pheromones. Pheromones are chemicals secreted by an individual, which are then received and interpreted by others. They can act in a similar way to hormones but outside the body, affecting the behaviour of the receiving creatures. These pheromones trigger many different types of social response, including warning and alarm, trails to show where food is, and mating signals.

I am investigating aggregation pheromones. These are used by seashore springtails to signal and form clusters before high tide, to shelter from the incoming waters. My research involves determining what the pheromone is made of, and how its signals are transmitted between the individual animals.

Research on communication plays an enormous role in studies of animal behaviour. Attempts to identify and translate the information exchanged in the calls and signalling systems of countless animals of all shapes and sizes are ongoing, and pheromone or chemical communication is just a tiny part of this. Although non-human animals don't communicate with what most people would consider language, the boundaries seem to blur increasingly as we learn more and more about the different characteristics of animal communication. And human language itself isn't just about the words used. We use sounds, tone of voice, gestures, pictures and diagrams as part of our interpersonal communication.

During a recent seminar series, we looked at communication in science, including who's doing the communicating – and how. Scientists, teachers, publishers, politicians and policy makers, investors and advertisers, film makers and broadcasters, journalists, museums, the public and even bloggers are all communicating with variable levels of fact and fiction. Amongst other things, it made me think about how many different types and levels of communication there are, even within my own small experience of the scientific world.

Although these days I’m usually found reading academic papers and writing thesis-type paragraphs, in the past I spent time as a primary school teaching assistant and a research student tutor, so I’ve had some experience of communicating knowledge to non-specialists. My earliest memories of doing this were with my mum during my undergraduate degree, and even while still at school.

My mum was very artistic, having trained as a fashion designer, but she was the first to admit she didn't know much about science. Everything was interesting to her though, and I used to love explaining things to her, going through the stages of cell division, drawing diagrams on bits of scrap paper at the kitchen table. I've always thought it was the best way to learn something myself too, because explaining something to someone else seems to help things stay in my own head. Sitting in the pub with colleagues sketching ideas on beer mats also seems to work in a similar way.

Speaking of sitting in the pub, we humans are pretty social animals and most of us enjoy being in groups, for work as well as social activities. We don't even need words to communicate our ideas to each other most of the time. For example, how often do we convey "call me" with a phone-to-ear gesture, or "fancy a drink?" by raising an invisible glass to our face? We use many signals to alter the behaviour of others and coordinate it with our own, as do the springtails. Humans also form groups and clusters for protection.

And for that protection to be more than just short-term and physical, maybe improving science communication could help protect us all, by deepening our understanding and helping us find solutions to our problems.

Picture shows Anurida maritima aggregating on the water surface (Credit: Evan C, https://uk.inaturalist.org/photos/218292613)

Elise Michele Heinz is a PhD student in the Faculty of Science and Technology.

Categories
Science & technology, Social commentary

New perspectives on evolutionary theory revealed

Tom Dickins, a Professor of Behavioural Science, has written a new book which explains how popular theories on evolutionary biology have changed in modern times

The Modern Synthesis was a long period of theoretical development in evolutionary biology that began with the invention of population genetics in the early 1900s. A key innovation was Fisher's analogical use of the ideal gas laws to envisage a population of particles, or genes, randomly bumping into one another. Just as with atoms, these genes would remain in equilibrium until some external force changed that – natural selection was such a force. This analogy enabled a statistical synthesis between Mendel's views of particulate inheritance and Darwin's theory of gradual evolution, whilst reinterpreting evolution as changes in gene frequencies within a population.
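To make the gene-frequency picture concrete, here is a minimal Python sketch (my illustration, not Fisher's model or code from the book) of a finite population in the spirit of the particle analogy: with equal fitnesses the allele frequency only wanders randomly, while a small fitness advantage acts as the external force that pushes it away from equilibrium. The function name and parameters are invented for the example.

```python
import random

def next_generation(p, pop_size, fitness_a=1.0, fitness_b=1.0):
    """Sample the next generation's frequency of allele A.

    Selection re-weights the probability of drawing A; the finite
    pop_size adds the random jitter known as genetic drift.
    """
    mean_fitness = p * fitness_a + (1 - p) * fitness_b
    p_after_selection = p * fitness_a / mean_fitness
    draws = sum(random.random() < p_after_selection for _ in range(pop_size))
    return draws / pop_size

random.seed(1)
p = 0.5  # start at a 50:50 mix of alleles A and B
for generation in range(50):
    p = next_generation(p, pop_size=10_000, fitness_a=1.05)  # A slightly fitter
print(f"frequency of A after 50 generations: {p:.3f}")  # A spreads towards fixation
```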

Evolutionary ideas

The Modern Synthesis also saw the removal of some older evolutionary ideas, including those from Lamarck. Lamarck argued that developmental processes could be induced within an individual as a response to the environment. Development could lead to new and useful forms that would be inherited and would thus persist. Darwin had replaced this definition of evolution with one where successful trait variants were retained at the population level because more individuals with those variants survived and reproduced. Unsuccessful variants were removed. But Darwin did allow a role for transformation in the generation of new variation. Population genetics removed Lamarckian transformation entirely, relying instead upon genetic variation and the outcome of selection. During the Synthesis additional processes were included, such as genetic drift, but all reorganized the constitution of the population without reliance upon a theory of development.

The role of developmental biology

The removal of Lamarck and the redefinition of evolution as a population-level process meant that evolutionary biology was no longer a theory of form. This change has not gone unnoticed, and contemporary scholars have begun to look again at the role of developmental biology in evolution. Some have explicitly called for an Extended Evolutionary Synthesis that will incorporate mechanistic theories of form. In doing this, these scholars are arguing both for new models of the emergence of useful variation and for new types of inheritance, most especially through developmentally induced transgenerational epigenetic effects.

A central argument of the Extended Evolutionary Synthesis is directed toward what is referred to as gene-centrism. During the latter stages of the Synthesis, in the 1960s, biologists began to model genes as agents whose goal is to replicate across the generations. To achieve this goal, genes contribute to traits that enable the survival and reproduction of the organisms they find themselves in. Genes, as replicators, can span many, many generations. Bodies, as vehicles for those genes, are mortal. This heuristic captured sophisticated mathematical modelling that allowed biologists to address central questions about the emergence of social behaviour, leading to the development of inclusive fitness theory. The replicator-vehicle view saw genes as packets of information, transmitted across generations, and conveying instructions for a developmental program. For many critics this view was taken as preformationist, as an assertion that the gene contained everything and should be privileged in causal models of form.

In my book The Modern Synthesis: Evolution and the Organization of Information (pictured above) I give a detailed history of the Synthesis. I argue that the use of information concepts during this period, and since, has been informal, and that this informality has enabled a reified view of information to emerge. By this I mean that in colloquial terms scientists have talked as if information is something to be harvested and transmitted, and this in turn has allowed a view that information can have causal powers. This has much to do with misinterpretations of Shannon's 1948 mathematical theory of communication, a theory that emerged just prior to information concepts in biology. Shannon's work is often treated as a theory of information when in fact it was really a theory of data that enabled the quantification of information. I give the detail of this position and offer what I consider to be a more appropriate interpretation, where information is seen as a functional outcome of the relationship between data (as input) and a context (or system) into which it is fed.
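To see why Shannon's theory quantifies rather than defines information, here is a minimal Python sketch (my toy example, not from the book): it computes the entropy of a symbol stream from the statistics of the data alone, and is silent on what, if anything, the symbols mean to any receiving context.

```python
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Shannon entropy of the empirical symbol distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(entropy_bits("AAAA"))      # 0.0 – data, but no uncertainty to quantify
print(entropy_bits("ACGTACGT"))  # 2.0 – a maximally mixed four-symbol source
```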

Genes are data

Taking this contextual view of information, I then show how genes, or more precisely DNA codons, are to be seen as data that play a role in protein synthesis contexts. I show how this view is inherent in the writings of the Modern Synthesis, but also how it enables us to make sense of development within evolutionary biology. Genes are causally prior in developmental sequences, but not conceptually central. Claims that the late-stage Modern Synthesis was gene-centric with respect to development are thus shown to be overwrought. More technically, I then spend several chapters investigating key aspects of the developmental challenge to the Modern Synthesis, showing how each mechanistic theory is entirely compatible with the Synthesis under a correct view of information.
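The codons-as-data view can be caricatured in a few lines of Python (a deliberately toy illustration with a partial codon table, not code from the book): the same DNA string yields a functional outcome only in the presence of a translating context.

```python
# A toy fragment of the standard codon table: the "context" that turns
# codon data into a functional outcome (an amino acid). Deliberately partial.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read codons (data) through the table (context) until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "?")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

Without the table, the string is mere data; the information is a property of the data-context relationship, not of the codons alone.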

The central claim of my book is that evolutionary processes enable the organization of information by selecting for data-context relationships. Life is fundamentally informational. But the book is also a defence of the Modern Synthesis, and I close by discussing how such large-scale framework theories act to corral and constrain multiple bespoke theories. Developmental biology is in the business of explaining the development of multiple different systems. It is unlikely that all these theories will cohere under one developmental framework, but as complex data processing systems they can all be made sense of in terms of evolution. In this way evolutionary theory provides an account not of specific forms but instead of the kinds of form that development must deliver.

The book is available online and all Middlesex University staff and students can access the book here.

Top photo by Eugene Zhyvchik on Unsplash

Categories
Science & technology

News from the computational lab – now what?

Dr Giuseppe Primiero (pictured right), Senior Lecturer in Computing Science and a member of the Foundations of Computing research group at Middlesex University, and Professor Viola Schaffonati, of the Politecnico di Milano, Italy, are working on a philosophical analysis of the methodological aspects of computer science.

In February 2016 science hit the news again: the merger of a binary black hole system had been detected by the Advanced LIGO twin instruments, one in Hanford, Washington, and the other 3,000 km away in Livingston, Louisiana, USA. The signal, detected in September 2015, came from gravitational waves famously predicted by Einstein's general theory of relativity. The phenomenon had also been numerically modelled on supercomputers since at least 2005 – a typical example of a computational experiment.

Computational experiments

The term 'computational experiment' refers to a computer simulation of a real scientific experiment. A simpler example: to measure some macroscopic property of a liquid that is hard to obtain, or where the equipment is too expensive to purchase (in an educational setting, for example), a simulation is a more feasible solution than the real experiment. Computational experiments are widely used in several disciplines, such as chemistry, biology and the social sciences. As experiments are the essence of scientific methodology, computer simulations indirectly raise interesting questions: how do computational experiments affect results in the other sciences? And what kind of scientific method do computational experiments support?
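As a hedged stand-in for such an experiment (a toy Python model, not a serious liquid simulation), the sketch below recovers a 'macroscopic' regularity – the mean squared displacement of diffusing particles grows linearly with time – purely from simulated microscopic randomness.

```python
import random

def mean_squared_displacement(steps: int, walkers: int = 5_000) -> float:
    """Average squared end-to-end distance of unbiased 1D random walks."""
    total = 0.0
    for _ in range(walkers):
        position = sum(random.choice((-1, 1)) for _ in range(steps))
        total += position * position
    return total / walkers

random.seed(0)
for t in (100, 200, 400):
    # For diffusive motion the estimate should grow roughly linearly with t.
    print(t, round(mean_squared_displacement(t), 1))
```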

These questions highlight the much older problem of the status and methodology of computer science (CS) itself. Today we are acquainted with CS as a well-established discipline. Given the pervasiveness of computational artefacts in everyday life, we can even consider computing a major actor in academic, scientific and social contexts. But the status enjoyed today by CS has not always been granted. CS, since its early days, has been a minor god. At the beginning, computers were instruments for the 'real sciences': physics, mathematics and astronomy needed to perform calculations that had reached levels of complexity unfeasible for human agents.

Computers were also instruments for social and political aims: the US army used them to compute ballistic tables and, notoriously, mechanical and semi-computational methods were at work in solving cryptographic codes during the Second World War.

The UK and the US were pioneers in the transformation that brought CS into the higher education system: the first degree in CS was established at the University of Cambridge Computer Laboratory in 1953 by the mathematics faculty, to meet the demand for competencies in mechanical computation applied to scientific research. It was followed by Purdue University in 1962. The academic birth of CS is thus the result of creating technical support for other sciences, rather than the acknowledgement of a new science. Subsequent decades brought forth a quest for the scientific status of this discipline. The role of computer experiments as they are used to support results in other sciences, a topic which has been largely investigated, seems to perpetuate this ancillary role of computing.

The collision of two black holes – a tremendously powerful event detected for the first time ever by the Laser Interferometer Gravitational-Wave Observatory, or LIGO – is seen in this still from a computer simulation. Photo by the SXS (Simulating eXtreme Spacetimes) Project.

A science?

But what then is the scientific value of computational experiments? Can they be used to assert that computing is a scientific discipline in its own right? The natural sciences have a codified investigation method: a problem is identified; a predictable and testable hypothesis is formulated; a study to test the hypothesis is devised; analyses are performed and the results of the test are evaluated; on their basis, the hypothesis and the tests are modified and repeated; finally, a theory that confirms or rejects the hypothesis is formulated. One important consideration is therefore the applicability of the so-called hypothetico-deductive method to CS. This, in turn, hides several smaller issues.

The first concerns which 'computational problems' would fit such a method. Intuitively, when one refers to the use of computational techniques to address some scientific problem, the latter can come from a variety of backgrounds. We might be interested in computing the value of some equations to test the stability of a bridge. Or we might be interested in knowing the best-fit curve for the increase of some disease, economic behaviour or demographic factor in a given social group. Or we might be interested in investigating a biological entity. These cases highlight the old role of computing as a technique to facilitate and speed up the process of extracting data, and possibly to suggest correlations, within a well-specified scientific context: computational physics, chemistry, econometrics, biology.
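The best-fit-curve case is easy to make concrete. Below is a minimal Python sketch (invented numbers, purely illustrative) of ordinary least squares fitting a linear trend to hypothetical yearly case counts: the computer does service work for a question posed entirely outside computing.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical yearly case counts for some disease in a social group.
years = [0, 1, 2, 3, 4]
cases = [12, 17, 22, 25, 31]
a, b = fit_line(years, cases)
print(f"best-fit trend: {b:.1f} extra cases per year (intercept {a:.1f})")
```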

But besides the understanding of 'computational experiment' as the computational study of a non-computational phenomenon, the computational sciences themselves offer problems that can be addressed computationally: how stable is your internet connection? How safe is your installation process when external libraries are required? How consistent are the data extracted from some sample? To name just a few. These problems (or their formal models) are investigated through computational experiments, but they seem less easily identified with scientific problems.

The second concerns how to formulate a good hypothesis for a computational experiment. Scientific hypotheses depend on the system of reference and, in the case of their translation to a computational setting, we have to be careful that the relevant properties of the system under observation are preserved. An additional complication arises when the observation itself concerns a computational system, which might include a formal system, a piece of software, or implemented artefacts. Each of the levels of abstraction pertaining to computing reveals a specific understanding of the system, and they can all be taken as essential in the definition of a computing system. Is a hypothesis on such systems then admissible if formulated at only one such level of abstraction, e.g. considering a piece of code but not its running instances? And is such a hypothesis still well formulated if it tries instead to account for all the different aspects that a computational system presents?

Finally, an essential characteristic of scientific experiments is their repeatability. In computing, this criterion can be understood and interpreted differently: should an experiment be repeatable under exactly the same circumstances for exactly the same computational system? Should it be repeatable for a whole class of systems of the same type? How do we characterize typability in the case of software? And how in the case of hardware?
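One narrow, practical sense of repeatability can be shown directly. In the minimal Python sketch below (assuming a stochastic experiment driven by a pseudo-random generator), fixing the seed makes a re-run bit-for-bit identical on the same system, while changing it yields an independent replication that should agree only statistically; repeatability across different hardware or library versions is a further, harder question.

```python
import random

def noisy_experiment(seed: int, trials: int = 1_000) -> float:
    """A stochastic 'experiment': the fraction of random draws above 0.9."""
    rng = random.Random(seed)  # a private generator; no hidden global state
    return sum(rng.random() > 0.9 for _ in range(trials)) / trials

# Same seed: the whole experiment re-runs bit-for-bit identically.
assert noisy_experiment(42) == noisy_experiment(42)

# A different seed is not a repetition but an independent replication,
# which should agree only statistically (both values are close to 0.1).
print(noisy_experiment(42), noisy_experiment(43))
```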

Irregularities

All the above questions underpin our understanding of what a computational experiment is. Although we are used to expecting some scientific uniformity in the notion of experiment, the case of CS evades such strict criteria. First of all, several sub-disciplines categorise experiments in very specific ways, each not easily applicable by the research group next door: testing a piece of software for requirements satisfaction is essentially very different from testing a robotic arm for identifying its own positioning.

Experiments in the computational domain do not offer the same regularities that can be observed in the physical, biological and even social sciences. The notion of experiment is often confounded with the more basic and domain-related activity of performing tests. For example, model-based testing is a well-defined formal and theoretical method that differs from computer simulation in admissible techniques, recognised methodology, assumptions and verifiability of results. Accordingly, the process of checking a hypothesis that characterises the scientific method described above is often intended simply as testing or checking some functionality of the system at hand, while in other cases it implies a much stronger theoretical meaning. Here the notion of repeatability (of an experiment) merges with the replicability (of an artefact) – a distinction that has already appeared in the literature (Drummond).

Finally, benchmarking is understood as an objective performance evaluation of computer systems under controlled conditions: does it in some sense characterise the quality of computational experiments, or simply identify the computational artefacts that can validly be subjected to experimental practices?
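For readers unfamiliar with the practice, a benchmark in this narrow sense looks like the sketch below (a toy comparison, not a methodological proposal): same machine, same interpreter, repeated runs, and a best-of-N summary to damp scheduling noise.

```python
import timeit

# Two ways of computing the same list of squares, timed under controlled
# conditions. Taking the minimum over several repeats filters out noise
# from the operating system's scheduler and other background activity.
loop_time = min(timeit.repeat(
    "s = []\nfor i in range(1000):\n    s.append(i * i)",
    repeat=5, number=1_000))
comp_time = min(timeit.repeat(
    "s = [i * i for i in range(1000)]",
    repeat=5, number=1_000))
print(f"loop: {loop_time:.3f}s  comprehension: {comp_time:.3f}s")
```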

A philosophical analysis

The philosophical analysis of the methodological aspects of CS, of which the above is an example, is a growing research area. The set of research questions that need to be approached is large and diversified. Among these, the analysis of the role of computational experiments in the sciences is not new; less well understood is the methodological role of computer simulations within CS itself, rather than as a support method for testing hypotheses in other sciences.

The Department of Computer Science at Middlesex University is leading both research and teaching activities in this area, in collaboration with several European partners, including the Dipartimento di Elettronica, Informazione e Bioingegneria at Politecnico di Milano in Italy, which offers similar activities and has a partnership with Middlesex through the Erasmus+ network.

In an intense one-week visit, we drafted initial research questions and planned future activities. The following questions represent a starting point for our analysis:

  • Do experiments on computational artefacts (e.g. a simulation of a piece of software) differ in any significant way from experiments performed on engineering artefacts (like a bridge), social phenomena (a migration) or physical phenomena (fluid dynamics)?
  • Does the nature of computational artefacts influence the definition of a computational experiment? In other words, is running an experiment on a computer significantly different from running it in a possibly smaller-scale but real-world scenario?
  • Does the way in which a computational experiment is implemented influence the validity and generality of its results? In what way do the coding, its language and the choice of algorithms affect the results?

These questions require considering the different types of computer simulations, as well as other types of computational experiments, along with the specificities of the problems treated. For example, an agent-based simulation of a messaging system poses problems and offers results that are inherently different from testing a monitoring system for privacy on social networks with real users. The philosophical analysis of the methodological aspects of CS has an impact not only on the discussion about the discipline, but also on how its disciplinary status is acknowledged by a larger audience.
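To fix ideas, here is a deliberately tiny agent-based sketch of a messaging system in Python (all names and parameters are hypothetical and purely illustrative): agents exchange messages over a lossy channel, and the 'experiment' checks whether the observed delivery ratio matches what the model predicts.

```python
import random

class Agent:
    """A node that accumulates the messages it successfully receives."""
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[int] = []

def send(receiver: Agent, message: int,
         loss_rate: float, rng: random.Random) -> None:
    """Deliver the message unless the lossy channel drops it."""
    if rng.random() > loss_rate:
        receiver.inbox.append(message)

def run_experiment(loss_rate: float, n_messages: int = 10_000,
                   seed: int = 0) -> float:
    rng = random.Random(seed)  # seeded, so the experiment is repeatable
    bob = Agent("bob")
    for i in range(n_messages):
        send(bob, i, loss_rate, rng)
    return len(bob.inbox) / n_messages  # observed delivery ratio

print(run_experiment(loss_rate=0.2))  # close to the predicted 0.8
```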

Nowadays we are getting used to reading about the role of computational experiments in scientific research and how computer-based results affect the progress of science. It is about time we became clear about their underlying methodology, so that we might say with some degree of confidence what their real meaning is.

Categories
Health & wellbeing

Why do some people develop depression?

Mental health has become a prominent topic of discussion in this year's general election campaign, with Liberal Democrat leader Nick Clegg pledging "equality for people with mental health issues" and £3.5 billion of additional funding for care.

But while this money would be used for new facilities, reducing waiting times and improving access to treatment – all cures – prevention is not addressed. Middlesex University's Dr Zola Mannie, a Research Fellow in the Faculty of Science and Technology, is one of those seeking to answer the important question: 'Why do some people develop depression?'

In my research I am investigating young people aged 18-21 years who have never been depressed. Within this age range I am interested in comparing those who have experienced childhood adversity and/or those with a depressed parent with those who have not experienced adversity or parental depression. By childhood adversity, I mean events such as parental/familial neglect, maltreatment and various forms of abuse before the age of 17 years.

Although it is known that childhood adversity and parental depression increase the risk of developing depression, they do not seem to be sufficient to cause its onset. That is, not everyone exposed to these events or situations will get depressed – some will, but a significant number will not. The question is why? It seems that there may be other important factors – both neurobiological and psychological – involved, and that is the basis of my research.

Depression – Photo by Ryan Melaugh (Creative Commons 2.0)

Could BDNF hold the key?

I am particularly interested in a protein called brain-derived neurotrophic factor (BDNF), which is involved in neuroplasticity – the brain's ability to strengthen or weaken neural connections in response to changes in the environment, behaviour, cognitive and emotional processes, or injury. The brain responds to these events throughout life, but how it responds will be partly influenced by BDNF expression and levels.

BDNF is therefore important for processes such as learning and memory. Extremely high or low levels of BDNF impair learning and memory processes. I am interested in whether there are differences in BDNF production, learning and memory between those who have childhood adversity and/or parental depression and those without either.

If my research reveals that those at increased risk produce less BDNF than those at low risk, it could be a step forward in our understanding of depression risk. The next step would be further research to test whether low BDNF and its associated learning and memory problems can predict who will get depressed. Although animal models of vulnerability to depression show that low BDNF can predict depressive-like behaviours, this has not yet been shown in humans.

The good news is that BDNF can be modified through behaviours such as physical exercise, energy restriction or cognitive training. Ultimately, through this type of research, interventions aimed at reducing the incidence of depression can be designed, and BDNF, learning and memory are potential targets.

Zola is looking for volunteers to assist with her research. Click here for more information if you would like to take part.