Categories
Science & technology

When’s a toothbrush not just a toothbrush?

Joanna Adler, Director of Forensic Psychological Services at Middlesex University, explores the increasing connectivity of technology and highlights some ways in which the internet of things may present new challenges for parents.

In November 2016, I bought an electronic toothbrush for a child to encourage independent, effective tooth brushing. The child loves it, and oral hygiene has definitely improved. Twice daily, a parent turns on smartphone location and Bluetooth settings so that the brush app can interact directly with the toothbrush and show the child where to move it. These settings only allow the phone and toothbrush to ‘communicate’ with one another, but twice daily there’s that security vulnerability, opened by an app designed for children aged four and up. There are options to avoid the pairing entirely, but you lose personalisation. There are also options to commit this toothbrush fully to the internet of things, running it across all desired devices.

We like seamless connectivity, its convenience and ease. I wrote this post on 5 January 2017, and that morning there was an infectiously enthusiastic BBC report from CES 2017. Rory Cellan-Jones told us how much fun he was having doing yoga with a robot, and about the next step for those toothbrushes, where AI will power them to process personal data to improve our dental health. The report also considered how far driverless cars have progressed. The BMW spokesperson enticed us with the idea that a ‘smart home’ could ‘drive with you’ – a film you started watching before leaving home could be played back to you in your car, which would automatically go into ‘cinema mode’, something that could already legally happen for rear passengers.

Image credit: Philips Communications, CC BY-NC-ND 2.0

Protecting our privacy

Wearable technologies and digitally enhanced products can improve health and autonomy. Our real and virtual worlds are already meshing ever more smoothly, and that’s what we apparently want. Even though I like the oxymoronic elegance of a smart dummy, and can appreciate the potential irony of using Barbie dolls to launch DDoS (Distributed Denial of Service) attacks, the internet of hacked things is not going away. It will impinge more on the services and parts of society that we care about, and neither industry nor governments seem to be acting with any kind of alacrity to crack down on default passwords or improve privacy, while most of the rest of us just like getting stuff for free.

Our data

If we’re not paying for something that costs something to produce, how is it funded? We’re the data. Ad-tracking and algorithm-based filtering are based on our choices, but are not neutral. We pay for all the stuff we like to access or own ‘for free’, through our privacy, our preferences and those of our children – data from phone beacons, search engines, streaming selections, the outcomes of conscious and non-conscious decisions that we make on devices and that we pass on, time and again, without thinking. The extent to which that’s a problem depends on who is getting data, about whom, the security of data, the extent and efficacy of anonymisation, the uses to which the data are put, whether or not users have genuinely chosen to pass on such data, whether or not they are legally empowered even to consent to their data being used in such ways, and the impacts this all has on us. These questions are all raised by the implications and enactment of the General Data Protection Regulation (GDPR).

Ad blocking

The Children’s Commissioner for England has also recently added her voice to concerns about the advertisements served up to children and young people, noting that they often can’t tell when something is an ad. This chimes with wider concerns about children’s media literacy. So what about ad-free sites? Paid-for subscription services are one way to proceed, but they are only available in certain jurisdictions and exclude those who can’t or won’t pay for the services. We can use ad-blocking software on free services, and if everyone used ad blockers, that really would disrupt the internet business model. But how likely is it that we’re all going to embrace an approach that means we start paying for things we think we’re currently getting for free?

Google are quite proud of YouTube Kids, telling us that it’s more responsible: a place for under-13s to browse safely, governed by a publicly available, professedly family-oriented advertising policy. It has two ‘prohibited’ sets of advertising – the first is ‘restricted’, which is apparently not as ‘prohibited’ as the other set, which is ‘strictly prohibited’. Both pertain only to adverts paid for directly through Google: ‘Content uploaded by users to their channels are not considered Paid Ads.’ But that’s okay, because if there’s a really annoying channel that keeps popping up, like a pre-school child reviewing toys, we can use parental filters and block channels that we don’t want children to see. Ah, no, apparently not in the UK. Maybe our legislation hasn’t made that a necessary operating principle.


A new social contract

That’s the nub of this problem: legislation will never be as swift or as agile as exponentially developing technology. However, if legislators abrogate their duties, there can be no commercial incentives to act more ethically, and there won’t be independent scrutiny. That is why both the Digital Economy Bill and the GDPR are so important, but also why each may be insufficient.

A new social contract and regulatory framework for the rapidly evolving digital ecosystem require us to play our parts too. I have highlighted YouTube Kids here, but pick any major player you like. None of them should be expected to be our personal ethical filters, and we can’t entrust parental responsibilities to any entity that has its own duties to employees, shareholders or trustees. The Children’s Commissioner’s report suggests a digital ombudsman to mediate between young people and social media companies, although the scope and remit of such a role are not entirely clear. The report also pushes forward the 5Rights for young people and calls for a joined-up programme to build digital resilience through digital citizenship. It joins calls to review the UN Convention on the Rights of the Child to include digital rights and protections.

This is ambitious and makes sense, but as with real-world rights, these measures can only offer protection if they are implemented and if we all understand their consequences, both when flouted and when enacted. Meanwhile, maybe we could think a little more before turning on location settings, and if an internet of things device has an unchangeable password, it won’t be getting room in my bathroom cabinet anytime soon.

This article originally appeared on Parenting for a Digital Future

Categories
Science & technology

Can AI help combat fake news?

Former BBC reporter and Professor of Journalism Kurt Barling argues that artificial intelligence may provide an antidote to the unverified facts and hyperbole running rampant in online journalism.

When I started in TV journalism three decades ago, pictures were still gathered on film. By the time I left the BBC in 2015, smartphones were being used to beam pictures live to the audience. Following the digital revolution and the rise of online giants such as Facebook and Google, we have witnessed what Joseph Schumpeter described as the “creative destruction” of the old order and its replacement by the innovative practices of new media.

There has been a great deal of furious – and often hyperbolic – discussion in the wake of the US election, blaming the “echo-chamber” of the internet – and Facebook in particular – for distorting political discourse and drowning the online public in ‘fake news’. Antidotes are now sought to ensure that “truth filters” guard the likes of Facebook – and its users – from abuse at the hands of con artists wielding algorithms.

Students in the town of Veles, Macedonia where reports say hundreds of websites are churning out ‘fake news’ designed to appeal to Donald Trump supporters on social media (Image: EPA/Georgi Licovski)

Facebook and Google are now the big beasts of the internet when it comes to distributing news – and as they have sought to secure advertising revenue, what has slowly but surely emerged is a kind of “click-mania”. This is how it works: the social media platforms and search engines advertise around news stories, which means that the more clicks a story gets the more eyeballs see the social media sites’ advertising, which generates revenue for them. In this media environment, more clicks mean more revenue – so the content they prioritise is inevitably skewed towards “clickbait” – stories chosen for their probability of getting lots and lots of readers to click on them. Quality and veracity are low on the list of requirements for these stories.

It is difficult to argue that this did not affect online editorial priorities, with hyperbolic headlines becoming ever more tuned to this end. At times, on some platforms, it resulted in what Nick Davies dubbed “churnalism”, whereby stories were not properly fact-checked or researched.

Erosion of trust

Consumption patterns are inevitably affected by all this creative destruction and social media sites have quickly replaced “the press” as leading sources of news. And yet there is the danger that the resulting information overload is eroding trust in information providers.


The outgoing US president, Barack Obama, captured the dilemma the public faces on his recent trip to Germany:

If we are not serious about facts and what’s true and what’s not, if we can’t discriminate between serious arguments and propaganda, then we have problems.

There is a renewed recognition that the traditional “gatekeepers” – journalists working in newsrooms – do provide a useful filter mechanism for the overabundance of information that confronts the consumer. But their once-steady advertising revenues are fast being rerouted to Facebook and Google. As a result, traditional news companies are bleeding to death – and the currently popular strategy of introducing paywalls and subscriptions is not making up the losses. Worse still, many newspapers continue to suffer double-digit falls in circulation, so the gatekeepers are “rationalised” and the public is the poorer for it.

Rise of the algorithm

One of the answers lies in repurposing modern newsrooms, which is what the Washington Post is doing under its new owner Jeff Bezos. Certainly, journalists have to find ways of encouraging people to rely less on, or become more sceptical of, social media as their primary source for news. Even Facebook has recognised it needs to do more to avoid fakery being laundered and normalised on its platform.

So how to avoid falling for fakery? One option involves the use of intelligent machines. We live in a media age of algorithms and there is the potential to use Artificial Intelligence as a fundamental complement to the journalistic process – rather than simply as a tool to better direct advertising or to deliver personalised editorial priorities to readers.

Software engineers already know how to build a digital architecture with natural language processing techniques to recognise basic storylines. What is to stop them sampling a range of versions of a story from various validated sources to create a data set, and then using algorithms to strip out bias and reconstruct the core, corroborated facts of any given event?
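
As a purely illustrative sketch of the idea – not anyone’s actual system – the following Python fragment keeps only those sentences that are corroborated by at least a minimum number of independent sources. The sources, the bag-of-words overlap measure and the thresholds are all invented for the example; a real engine would use far richer natural language processing.

# Illustrative only: keep claims that several independent sources corroborate.
# A real system would use proper NLP (entity linking, stance detection), not
# bag-of-words overlap between sentences.
import re

def sentences(text):
    """Very naive sentence splitter."""
    return [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]

def word_set(sentence):
    return set(re.findall(r'[a-z]+', sentence.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def corroborated_core(sources, min_sources=2, threshold=0.5):
    """Return sentences supported by at least `min_sources` of the sources."""
    core = []
    for i, text in enumerate(sources):
        for sent in sentences(text):
            support = 1  # the source it came from
            for j, other in enumerate(sources):
                if j != i and any(jaccard(word_set(sent), word_set(o)) >= threshold
                                  for o in sentences(other)):
                    support += 1
            if support >= min_sources and sent not in core:
                core.append(sent)
    return core

reports = [
    "The minister resigned on Tuesday. Aliens were blamed by one blogger.",
    "On Tuesday the minister resigned. Markets stayed calm.",
    "The minister resigned on Tuesday after a week of pressure.",
]
print(corroborated_core(reports))  # the resignation survives, the aliens do not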

Aggregation and summation techniques are beginning to deliver results. I know of at least one British tech start-up that, although still in the research and development phase, has built an engine that uses a natural language processing approach to digest data from multiple sources, identify a storyline and provide a credible, artificially intelligent summary. It’s a question of interpretation. It is, if you will, a prototype “bullshit detector” – an algorithmic solution that mimics the old-fashioned journalistic value of searching for the truth.

If we look at the mess our democracies have fallen into because of the new age of free-for-all information, it is clear that we need to urgently harness artificial intelligence to protect open debate – not stifle it. This is one anchor of our democracies that we cannot afford to abandon.

This article originally appeared on The Conversation.

Categories
Science & technology

News from the computational lab – now what?

Dr Giuseppe Primiero, Senior Lecturer in Computing Science and a member of the Foundations of Computing research group at Middlesex University, and Professor Viola Schaffonati, of the Politecnico di Milano, Italy, are working on a philosophical analysis of the methodological aspects of computer science.

In February 2016 science hit the news again: the merger of a binary black hole system was detected by the Advanced LIGO twin instruments, one in Hanford, Washington, and the other 3,000 km away in Livingston, Louisiana, USA. The signal, detected in September 2015, came from gravitational waves famously predicted by Einstein’s general theory of relativity. The phenomenon had also been numerically modelled on supercomputers since at least 2005 – a typical example of a computational experiment.

Computational experiments

The term ‘computational experiment’ refers to a computer simulation of a real scientific experiment. A simpler example: to test some macroscopic property of a liquid that is hard to measure directly, or where the equipment is too expensive to purchase (e.g. in an educational setting), a simulation is a more feasible solution than the real experiment. Computational experiments are widely used in disciplines such as chemistry, biology and the social sciences. As experiments are the essence of scientific methodology, computer simulations indirectly raise interesting questions: how do computational experiments affect results in the other sciences? And what kind of scientific method do computational experiments support?
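
By way of a concrete (and entirely toy) illustration of what such a computational experiment can look like, here is a minimal Python sketch that estimates a macroscopic quantity – the mean squared displacement of diffusing particles – from a random-walk model rather than from a laboratory measurement. The model and every parameter are invented for the example.

# A toy 'computational experiment': instead of measuring diffusion in a real
# liquid, simulate particles as random walkers and estimate a macroscopic
# quantity from the model. All parameters are illustrative.
import math
import random

def mean_squared_displacement(n_particles=1000, n_steps=500, step=1.0, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(n_steps):
            angle = rng.uniform(0, 2 * math.pi)   # move in a random direction
            x += step * math.cos(angle)
            y += step * math.sin(angle)
        total += x * x + y * y
    return total / n_particles

# For an unbiased 2D random walk, theory predicts MSD ≈ n_steps * step**2,
# so the simulated value should come out close to 500 here.
print(mean_squared_displacement())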

These questions highlight the much older problem of the status and methodology of computer science (CS) itself. Today we are acquainted with CS as a well-established discipline. Given the pervasiveness of computational artefacts in everyday life, we can even consider computing a major actor in academic, scientific and social contexts. But the status enjoyed today by CS has not always been granted. CS, since its early days, has been a minor god. At the beginning computers were instruments for the ‘real sciences’: physics, mathematics, astronomy needed to perform calculations that had reached levels of complexity unfeasible for human agents.

Computers were also instruments for social and political aims: the US Army used them to compute ballistic tables and, famously, mechanical and semi-computational methods were at work solving cryptographic codes during the Second World War.

The UK and the US were pioneers in the transformation that brought CS into the higher education system: the first degree in CS was established at the University of Cambridge Computer Laboratory in 1953 by the mathematics faculty, to meet the demand for competencies in mechanical computation applied to scientific research. It was followed by Purdue University in 1962. The academic birth of CS is thus the result of creating technical support for other sciences, rather than the acknowledgement of a new science. Subsequent decades brought a quest for the scientific status of the discipline. The role of computational experiments in supporting results in other sciences – a topic which has been largely investigated – seems to perpetuate this ancillary role of computing.

The collision of two black holes – a tremendously powerful event detected for the first time ever by the Laser Interferometer Gravitational-Wave Observatory, or LIGO – is seen in this still from a computer simulation. Photo by the SXS (Simulating eXtreme Spacetimes) Project.

A science?

But what then is the scientific value of computational experiments? Can they be used to assert that computing is a scientific discipline in its own right? The natural sciences have a codified method of investigation: a problem is identified; a predictable and testable hypothesis is formulated; a study to test the hypothesis is devised; analyses are performed and the results of the test are evaluated; on that basis, the hypothesis and the tests are modified and repeated; finally, a theory that confirms or refutes the hypothesis is formulated. One important consideration is therefore the applicability of the so-called hypothetico-deductive method to CS. This, in turn, hides several smaller issues.

The first concerns which ‘computational problems’ would fit such a method. Intuitively, when one refers to the use of computational techniques to address some scientific problem, the latter can come from a variety of backgrounds. We might be interested in computing the value of some equations to test the stability of a bridge. Or we might be interested in knowing the best-fit curve for the increase of some disease, economic behaviour or demographic factor in a given social group. Or we might be interested in investigating a biological entity. These cases highlight the old role of computing as a technique to facilitate and speed up the process of extracting data and possibly suggest correlations within a well-specified scientific context: computational physics, chemistry, econometrics, biology.

An essential characteristic of scientific experiments is their repeatability.

But besides this understanding of a ‘computational experiment’ as the computational study of a non-computational phenomenon, the computational sciences themselves offer problems that can be addressed computationally: how stable is your internet connection? How safe is your installation process when external libraries are required? How consistent are the data extracted from some sample? To name just a few. These problems (or their formal models) are investigated through computational experiments, but they seem less easily identified with scientific problems.

The second: how do we formulate a good hypothesis for a computational experiment? Scientific hypotheses depend on the system of reference and, when translating them to a computational setting, we have to be careful that the relevant properties of the system under observation are preserved. An additional complication arises when the observation itself concerns a computational system, which might include a formal system, a piece of software, or implemented artefacts. Each of the levels of abstraction pertaining to computing reveals a specific understanding of the system, and they can all be taken as essential in the definition of a computing system. Is a hypothesis on such systems admissible, then, if formulated at only one such level of abstraction, e.g. considering a piece of code but not its running instances? And is such a hypothesis still well formulated if it tries instead to account for all the different aspects that a computational system presents?

Finally, an essential characteristic of scientific experiments is their repeatability. In computing, this criterion can be understood and interpreted in different ways: should an experiment be repeatable under exactly the same circumstances for exactly the same computational system? Should it be repeatable for a whole class of systems of the same type? How do we characterise what counts as the same type in the case of software, and how in the case of hardware?
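
One concrete reading of the first question is that rerunning the same program with the same inputs and the same random seed must reproduce exactly the same result, while repeating across a class of systems means varying what was held fixed and reporting the spread. A minimal Python sketch, invented purely for illustration:

# Repeatability in the narrowest sense: same program, same inputs, same seed,
# therefore exactly the same output on every run.
import random

def experiment(seed):
    rng = random.Random(seed)            # all randomness flows from one seed
    return sum(rng.gauss(0, 1) for _ in range(10_000))

run_1 = experiment(seed=42)
run_2 = experiment(seed=42)
assert run_1 == run_2                    # identical circumstances, identical result

# Repeatability across a class of runs: vary the seed and report the spread.
results = [experiment(seed=s) for s in range(20)]
print(min(results), max(results))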

Irregularities

All the above questions underpin our understanding of what a computational experiment is. Although we are used to expecting some scientific uniformity in the notion of an experiment, the case of CS evades such strict criteria. First of all, several sub-disciplines categorise experiments in very specific ways, each not easily applicable by the research group next door: testing a piece of software for requirements satisfaction is essentially very different from testing whether a robotic arm can identify its own position.

Experiments in the computational domain do not offer the same regularities that can be observed in the physical, biological and even social sciences. The notion of an experiment is often confounded with the more basic and domain-related activity of performing tests. For example, model-based testing is a well-defined formal and theoretical method that differs from computer simulation in admissible techniques, recognised methodology, assumptions and verifiability of results. Accordingly, the process of checking a hypothesis that characterises the scientific method described above is often intended simply as testing or checking some functionality of the system at hand, while in other cases it carries a much stronger theoretical meaning. Here the notion of repeatability (of an experiment) merges with that of replicability (of an artefact) – a distinction that has already appeared in the literature (Drummond).

Finally, benchmarking is understood as an objective performance evaluation of computer systems under controlled conditions: does it in some sense characterise the quality of computational experiments, or simply identify the computational artefacts that can validly be subjected to experimental practices?

A philosophical analysis

The philosophical analysis of the methodological aspects of CS, of which the above is an example, is a growing research area. The set of research questions to be approached is large and diversified. Among these, the analysis of the role of computational experiments in the sciences is not new; less well understood is the methodological role of computer simulations within CS itself, rather than as a support method for testing hypotheses in other sciences.

The Department of Computer Science at Middlesex University is leading both research and teaching activities in this area, in collaboration with several European partners, including the Dipartimento di Elettronica, Informazione e Bioingegneria at Politecnico di Milano in Italy, which offers similar activities and has a partnership with Middlesex through the Erasmus+ network.

In an intense one-week visit, we drafted initial research questions and planned future activities. The following questions represent a starting point for our analysis:

  • Do experiments on computational artefacts (e.g. a simulation of a piece of software) differ in any significant way from experiments performed on engineering artefacts (like a bridge), social (a migration) or physical phenomena (fluid dynamics)?
  • Does the nature of computational artefacts influence the definition of a computational experiment? Or in other words, is running an experiment on a computer significantly different than running it in a possibly smaller-scale but real-world scenario?
  • Does the way in which a computational experiment is implemented influence the validity and generality of its results? In which way does the coding, its language and choice of algorithms affect the results?

These questions require considering the different types of computer simulations, as well as other types of computational experiments, along with the specificities of the problems treated. For example, an agent-based simulation of a messaging system involves problems and offers results that are inherently different from those of testing a privacy-monitoring system for social networks with real users. The philosophical analysis of the methodological aspects of CS has an impact not only on the discussion about the discipline, but also on how its disciplinary status is acknowledged by a larger audience.

Nowadays we are getting used to reading about the role of computational experiments in scientific research and how computer-based results affect the progress of science. It is about time that we become clear about their underlying methodology, so that we might say with some degree of confidence what their real meaning is.

Categories
Science & technology

The shape of things to come

Mariachiara Di Cesare is a Senior Lecturer in Public Health in the Department of Natural Sciences at Middlesex University. She was co-author of a new study published today in The Lancet showing that we are far from halting the global obesity epidemic.

When I started working as part of a global consortium investigating non-communicable disease risk factors, I already knew obesity levels were dangerously high, but the scenario revealed in our new paper was beyond my worst prediction.

The paper, ‘Trends in adult body-mass index (BMI) in 200 countries from 1975 to 2014: a pooled analysis of 1698 population-based measurement studies with 19.2 million participants’, was made possible by the efforts of over 700 health scientists around the world, who enthusiastically embarked on the challenge of producing rigorous estimates for every country over the past 39 years.

Tackling the obesity epidemic is increasingly a priority for national and local governments, and in autumn last year the UK’s Chief Medical Officer argued for including obesity in national risk planning. But is the alarm justified?

Photo by We Love Costa Rica (Creative Commons 2.0)

Serious risks to health

Obesity alone has been estimated to have caused almost 4.5 million deaths worldwide in 2013 (8.1 per cent of the total), and it is directly associated with cardiovascular diseases, multiple cancers (oesophagus, colon and rectum, among others) and diabetes. Moreover, obesity ranks first among women and second among men as a risk factor associated with disability-adjusted life years (the years of healthy life lost, whether through living with a disease or through premature death). Apart from these serious risks to health, recent estimates suggest an overall economic cost of roughly $2 trillion globally.

The Lancet study provides an overall picture of more than 640 million obese adults (body mass index equal to or greater than 30 kg/m²) in 2014, corresponding to about 14 in every 100 adults. This is just a global average, and unfortunately the scenario is worse in many individual countries.
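
To show where a figure like ‘14 in every 100 adults’ comes from, here is a minimal Python sketch that applies the BMI cut-offs quoted in this article (obesity at or above 30 kg/m², underweight below 18.5 kg/m²) to an invented sample and computes a prevalence. It is illustrative only and uses none of the study’s data.

# Classify BMI with the cut-offs quoted in the article and compute a prevalence.
# The sample below is invented purely for illustration.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def category(bmi_value):
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value >= 30:
        return "obese"
    return "other"  # normal and overweight ranges, not discussed here

sample = [(58, 1.70), (95, 1.65), (49, 1.68), (110, 1.80), (70, 1.75)]  # (kg, m)
labels = [category(bmi(w, h)) for w, h in sample]
obese_prevalence = 100 * labels.count("obese") / len(labels)
print(f"Obesity prevalence in this toy sample: {obese_prevalence:.0f} per cent")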

A global trend

The ten countries with the highest prevalence of obesity, among both men and women, are all in Oceania, with the prevalence of obesity reaching 49 per cent among men in French Polynesia and the Cook Islands and over 57 per cent among women in American Samoa and the Cook Islands. At the opposite end of the rankings are Timor-Leste and Japan, with a prevalence of obesity among women below three per cent, and Burundi and Timor-Leste, with prevalences below one per cent for men. The interesting fact is that nowhere in the world between 1975 and 2014 did we observe a halt in the rise of obesity prevalence: the trend is positive (or should we say “negative”?) everywhere.

Tackling this issue is extremely complex but action is required from both individuals and governments.

The size of the population is not a minor component here. In both China and the USA, for example, roughly 46 million women and 42–43 million men are obese; however, the prevalence of obesity among women and men is “only” 8.2 per cent and 7.4 per cent in the former, compared with 33.6 per cent and 34.9 per cent in the latter.

Conversely, alongside the obesity debate, the research reveals that some populations are still malnourished. While the prevalence of people who are underweight (body mass index below 18.5 kg/m²) is declining, rates are still high in some South Asian countries such as Bangladesh and India, with underweight prevalences of 26.5 per cent and 24.9 per cent among women and 24.5 per cent and 23.1 per cent among men respectively.

Tackling the problem

The good news is that we can reverse obesity trends with concerted action. We need to push for better, easier and cheaper access to healthy food; reduce consumption of unhealthy food (e.g. sugar) through taxation; support and incentivise physical activity; and invest in health education from primary school age. And we need to seriously challenge lifestyle choices around diet and sedentary living. Tackling this issue is extremely complex, but action is required from both individuals and governments.

Categories
Science & technology

World-class Coaching Podcast featuring Chris Bishop

Categories
Science & technology

Grown-up conversations

Director of Forensic Psychological Services at Middlesex, Dr Joanna R. Adler, and her colleague, Deputy Director Dr Miranda Horvath, were part of an expert panel led by Dr Victoria Nash of the Oxford Internet Institute to report on ‘Identifying the Routes by which Children View Pornography Online: Implications for Future Policy-makers Seeking to Limit Viewing‘ for the Department for Culture, Media and Sport.

The Conservative government has launched the latest salvo in its manifesto pledge to prevent children from accessing pornography online, proposing that pornography websites would have to require age verification – for example a credit card check or some form of electronic identity backed by official ID.

A public consultation from the Department for Culture, Media and Sport is asking for responses to these proposals, which draw on our expert panel’s report on how children access pornography online. As part of the same pledge, the government introduced age ratings for music videos online, implemented by YouTube and Vevo, and since 2014 internet service providers (ISPs) have been expected to prompt their customers to decide whether or not they want family filtering applied to their internet connection.

Putting aside debates about whether pornography is harmful, or whether the chances of children viewing pornography online are sufficient to warrant major legislation, we do know that in study after study lots of under-18s do report seeing sexual content online or on their phones. It’s hard to determine precise numbers, or whether the content viewed is pornography or more mainstream content (think Game of Thrones nude scenes, or a Rihanna video), but there is evidence that they’re upset by what they see.

Clearly age, content and intent matter a great deal here. There’s a world of difference between a nine-year-old accidentally stumbling on an explicit video, and a 15-year-old seeking out content that helps them understand their sexual feelings or identity. As might be expected, many under-18s tell researchers they have seen sexual content accidentally rather than from seeking it out. Studies of older teens and those in their twenties reveal that they are often shown porn by others – perhaps for laughs, perhaps to shock, or perhaps as part of a relationship. Not all sharing is well-intentioned, and there are gender differences in how such experiences are interpreted.

Other recent studies in Britain, for example a 2012 NSPCC-commissioned report, reveal the extent to which teenage girls in particular can feel threatened by “technology-mediated sexual pressure from their peers”.

Photo by Andrew_Writer (Creative Commons 2.0)

More than just a technical challenge

It’s worth noting the sheer range of routes through which pornography is accessible. The Net Children Go Mobile study (2014) reported that children aged between nine and 16 have seen sexual images most commonly in magazines, on television and in films (which may or may not be streamed via the internet), as well as on video and photo sharing apps or websites. Other routes included pop-up ads, social networks and instant messaging.

It’s quite simply impossible to shut down all of these routes. As John Gilmore, one of the internet’s most famous civil libertarians once put it: “The internet interprets censorship as damage and routes around it.” Just as data packets crossing the internet will find a way around network obstacles, people will copy, re-post and share content to bypass restrictions.

While measures such as family-friendly internet filtering are important, if imperfect, tools for parents, we mustn’t forget that most pornography (apart from the most extreme forms) is legal for adults to view in the UK. An outright ban or blocking at the ISP level would be significant censorship. So, there’s a technical challenge in allowing adults access to pornography while keeping it away from children, but it presents a challenge to society too.

In terms of controlling the market, it’s currently illegal for companies based in the UK to sell or distribute pornography to anyone under 18, and pornographic material rated R18 (generally films) can only be sold through licensed sex shops. But applying this policy to the internet is difficult. There are many means of online access, and age-verification systems (such as using credit card details or checking against online databases) are not always used by websites, often because of their costs (and because they are not required) or because they may deter customers who can get the same content without checks elsewhere.

Sex education

Jurisdiction also matters. Analysis by The Economist suggested that there were 700 million to 800 million pages of porn online, three-fifths of them hosted in the US. The most obvious sources are the major “tube” sites that offer free content, often directing users towards paid-for sites with which they maintain a symbiotic relationship. But it is so easy to create, copy and exchange content that pornographic material can readily be found and downloaded using BitTorrent software, or even through social networks – not all of which forbid explicit material.

As demonstrated by the Facebook groups that were recently found to contain child abuse images, policing huge private networks for illegal material is fraught with difficulty. Formulating a way of managing access to material across the internet (or at least the web) when it is legal for adults is harder still. Requiring all commercial pornography providers whose content is served in the UK to implement age verification is a big ask – early indications are that some are on board already – but it’s a really important first step.

The lack of a perfect technical or market-level fix makes the challenge for society that much harder. As a nation, British people are not great at having sensible conversations about sex. A cultural history of Carry On films and tittering at pantomimes is accompanied by a state education system where there still isn’t statutory sex and relationship education in all secondary schools. Given that it’s practically impossible to ensure children don’t encounter pornography, surely it’s time we spent more time talking about this – at home, in schools, and as a society in general?

Pornography is fiction: a media product, not an objective depiction of real-life relationships, yet it may be the source of our children’s sexual education, with expectations adjusted accordingly. It’s also part of a wider, increasingly sexualised culture in which mainstream films, television, music videos and video-games can contain graphic and even violent sexual scenes. This should be the start, not the end, of the conversation.

This article was originally published on The Conversation. Read the original article.

Categories
Science & technology

Urban gulls: researching ‘public enemy #1’

A collaboration between Middlesex University and the University of the West of England is hoping to find new ways to help urban gulls live in harmony with humans. In this post, Professor of Behavioural Science Tom Dickins talks gull science.

An open bag of chips at the seaside has always been fair game for a passing gull. I remember this from holidays in Devon and Cornwall over 30 years ago. It was considered part of the experience – we were getting closer to nature. But in recent years, we humans appear to have become less tolerant of gulls. Last summer in particular, the press reported a small number of negative incidents in lurid detail. For example, The Mirror (23 July 2015) published an article entitled ‘Seagull menace: Truth about Britain’s new Public Enemy No1 following spate of attacks’. This story listed raids on picnics, aggressive defence of chicks and attacks on pets. It then presented a sequence of photographs of a lesser black-backed gull (Larus fuscus) eating a starling (Sturnus vulgaris), above the caption ‘Bird murder’. Clearly The Mirror had taken a negative view of these events, in spite of pointing out the protected status of gulls and the claims from conservation groups that these birds were behaving normally – this is really a problem of two populations trying to live alongside one another.

During my lifetime the number of urban gulls has increased, while coastal populations are generally in decline. Overall, gulls are under threat and of conservation concern. The Guardian (6 June 2010) carried a conservation-oriented story earlier in this growing controversy, pointing out the RSPB’s claim that any negative intervention, such as culling urban breeding gulls, would challenge their status even further. At various points the UK government has promised to look into this, and under the last administration it allocated £250,000 of DEFRA funding for research into managing urban gull populations. This money was promptly cut after the last election and there are currently no plans to support research.

A great black-backed gull predating a kittiwake nest (Photo: Tom Dickins, 2015)

What needs researching? 

The basic argument is that coastal populations are under resource stress, quite possibly induced by climate change. For example, gulls are surface feeders, diving to about one metre below the surface of the sea to catch fish. These fish are in this one-metre zone because it is the right temperature for them, but with rising sea temperatures this layer is moving deeper than gulls can reach. There is also evidence that changes in the North Atlantic Oscillation are shifting fish stocks and affecting breeding success in other ways. Stressed populations will disperse if they can and find other resources, and gulls are not only good at fishing but also at scavenging. Most gull species are excellent generalists, as the predation of a starling indicates. From my own observations, large gulls eat not only fish but also neighbours’ chicks and other birds. Indeed, great black-backed gulls (Larus marinus) regularly predate kittiwake (Rissa tridactyla) colonies and other marine birds as a staple part of their diet during the breeding season. This is nature – red in tooth and claw. Gulls need to convert whatever calories they can find into chicks.

The gulls I watch in the wild are not randomly distributed about the coast. They are expert foragers and make sound decisions about where to position their breeding sites and where to hunt. Great black-backed gull pairs breed in isolation or small clusters and appear to nest very close to kittiwake colonies at my field site. This enables a regular mid-morning and mid-afternoon visit to take eggs and later chicks from particular zones in the site. The great black-backed gulls also take other cliff nesting and coastal birds in flight, either at take off or landing, to supplement their diet. There is some evidence that these predated populations choose to be close to a great black-backed gull nest as this apex predator actually deters other gull species and thereby reduces predation costs.

Ground-nesting gulls, such as great black-backed gulls, but also herring gulls (Larus argentatus) and lesser black-backed gulls, are threatened not only by neighbours that see their chicks as a resource, but also by ground predators such as foxes and rats. Needless to say, gulls will defend their interests in all of these situations, and we have been researching this in wild populations recently.

So, gulls need to have access to food resource but also to defensible nest sites. Urban planners have not thought about these issues when designing towns and cities, but they have organised urban spaces into areas with higher proportions of food outlets, outdoor seating, green spaces and tall buildings. Buildings, especially with flat roofs, are like islands isolated from the ebb and flow of people below and also from many ground predators. Moreover, building managers usually take great pains to control rodents so even rats are few and far between at this height, and heat will leak from these buildings providing a fairly stable thermal environment for breeding.

Studying urban gulls in the City of Bath

In collaboration with Dr Chris Pawson (University of the West of England) and Bath and North East Somerset Council, we are embarking on a project in the city of Bath to map the behaviours of the large urban gull population resident there. The project will involve directed fieldwork – accessing roofs, mapping breeding sites, counting eggs and chicks, and observing behaviour – but it will also involve the citizens of Bath reporting incidents involving gulls through a dedicated website.

Through all of this data we hope to gain a sense of the gulls’ decisions: Where are they most likely to breed and nest? Where do they find it easiest to forage? How does this change across the year, across the day? In doing this we hope to see Bath as a gull does – a true bird’s eye view. And then we can use this information to suggest benign interventions to benefit the gulls and humans that live in the city.

It is likely that some of the advice we produce will be common sense; for example, the nature of food waste disposal in retail areas of the city may need better protection and different organisation. The behaviour of citizens around gulls may also need some direction. But we may also be able to think of how to design gull neighbourhoods, which remove these birds from direct contact with people while allowing them to thrive. We may also be able to change the nature of the interaction from one of antagonism to one of celebration, as some have done in the North East.

Categories
Science & technology

Bowie lands on wall where a Banksy was


Dr Susan Hansen is Senior Lecturer in Psychology and Chair of the Forensic Psychology Research Group at Middlesex University. Following the recent appearance of a David Bowie painting on a wall in North London, she discusses her study of graffiti and street art’s existence within a field of social interaction.

Last Saturday, a painting depicting the late David Bowie – in iconic 1970s costume from his Aladdin Sane tour – appeared on a wall in North London. The intricately stencilled piece, entitled ‘I would be your slave’, is by London-based, Chicago-born street artist Pegasus, who has a reputation for delivering timely commemorative works. A small stencilled work featuring the late Cilla Black is on a wall less than a mile from Bowie, and Pegasus also stencilled the late Amy Winehouse running alongside Camden Lock – a painting that was first whitewashed over, then repainted and ‘opened’ by Winehouse’s mother.

The North London wall in question (on the side of a Poundland discount store on Whymark Avenue in Turnpike Lane) has long been popular with street artists and graffiti writers, and was the original site of Banksy’s (2012) ‘Slave Labour’, which was famously cut off the wall without notice for private auction in February 2013. At the time, community protests at the ‘theft’ of this work were initially successful and ‘Slave Labour’ was withdrawn from auction in Miami, only to surface in London several months later, where it was sold for £750,000.

Whymark Avenue, London, January 2016 © Susan Hansen

The removal of street art from community walls for private auction is a morally problematic yet currently legal action. My research examines community reactions to the removal of street art for private auction. I’ve found that a new set of urban moral codes is being used to position street art as a valuable community asset worth preserving, rather than as an index of crime and social decay that should be painted over.

Whymark Avenue, London, May 2014–April 2015 © Susan Hansen

While graffiti is often seen as a sign of urban degeneration and social problems, street art is commonly viewed as an index of urban regeneration and gentrification. Islington Council (2014) warns that, “graffiti can be the catalyst for a downward spiral of neglect … and encourage other more serious criminal activity”. Even the Turnpike Art Group, the community art organisation responsible for facilitating Pegasus’ timely tribute, claimed that the wall had “slipped into urban decay” and was a space that had, post the removal of Banksy’s work, “lost its soul”.

Whymark Avenue, London, August 2015 © Veronica Bailey

Such aesthetic socio-moral judgments are based on long-held associations between graffiti and criminal activity, as a visible index of social deprivation and urban decay, and as a form of abjection and territory marking akin to public urination, as dirt or filth, or matter out of place. This discourse of disorder is grounded in graffiti’s transgression of the authorities’ more regulated visions of the city. As such, street art and graffiti offer a visible challenge to our notions of public and private space, and to the rights of property owners and other agents to alter our shared urban environment.

I have been documenting the North London wall now adorned by Bowie since August 2012. I use repeat photography to study street art and graffiti as visual dialogue. By capturing both recognisably ‘artistic’ street art, and visually ‘offensive’ graffiti tags, I aim to study graffiti and street art’s existence within a field of social interaction – as a form of conversation on urban walls that are constantly changing. This approach departs from other research methods in that it is not concerned with the analysis of decontextualised individual photographs of street art or graffiti – of the kind commonly found in glossy coffee table books and street art websites. The visual method I call ‘longitudinal photo-documentation’ – while incredibly time-consuming – is powerful in that the visual data it yields can make the ongoing dialogue among artists, writers and community members uniquely visible to researchers.

For more on the installation of ‘I would be your slave’ visit: http://www.turnpikeartgroup.co.uk/2016/01/i-would-be-your-slave-london-2016.html

For more on Susan’s research see: https://www.researchgate.net/profile/Susan_Hansen2

Hansen, S. (2015) “Pleasure stolen from the poor”: Community discourse on the ‘theft’ of a Banksy. Crime, Media & Culture. DOI: 10.1177/1741659015612880

Hansen, S. & Flynn, D. (2015) ‘This is not a Banksy!’: street art as aesthetic protest. Continuum: Journal of Media & Culture. DOI:10.1080/10304312.2015.1073685

Hansen, S. & Flynn, D. (2015) Longitudinal Photo-documentation: Recording Living Walls. Journal of Urban Creativity & Street Art, 1(1): 26-31.

Categories
Science & technology

The future of driving is here

Middlesex University’s Department of Computer Science has established the UK’s first academic VANET Research Testbed. Associate Professor Dr Glenford Mapp discusses how Vehicular Ad-Hoc Networks are set to revolutionise our transport systems.

Imagine you are driving home in the dark and fog after a long day. The driver in the vehicle two cars ahead suddenly slams on the brakes. However, instead of waiting for you to notice, your car immediately tells you what has happened so you can react in time.

Imagine you are driving down a busy London street. An accident occurs two streets away, but your car already knows this and reroutes your journey at the next junction to avoid the traffic.

Imagine you are approaching a traffic light signalling red to stop. As you near it, your car tells you the light will change to green in two seconds so you don’t need to stop.

Vehicular Ad-Hoc Network

These scenarios may sound far-fetched, but the technology to do these things is currently being developed and deployed. Known as a Vehicular Ad-Hoc Network (VANET), this technology could bring about the next information revolution – allowing us to have connected driverless cars, autonomous trains and always-locatable planes.

In the future all forms of transport will be connected and able to exchange information with each other to make our travel faster, safer and easier. These Intelligent Transport Systems (ITS) are fast becoming a major requirement in the development of the Smart Cities of the future.

Figure 1: Showing a VANET System

By now you’re probably asking how this is all possible. The answer is pretty simple.

VANETs allow us to integrate our transport and communication infrastructures through communication devices deployed along the road as shown in Figure 1.  These units are called Roadside Units (RSUs). The RSUs talk to a device in your car called an Onboard Unit (OBU).

OBUs can exchange information with RSUs as well as with each other, and because VANETs have been engineered to deliver information quickly and reliably they can be used in a number of safety-critical areas such as collision avoidance, accident notification and disaster management.
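
As a rough illustration of this message-passing idea – not of any real VANET protocol stack – the Python sketch below has one Onboard Unit broadcast a hard-braking alert that another OBU and an RSU on the same channel receive. Real deployments use dedicated short-range radio (such as IEEE 802.11p), signed messages and strict latency budgets, none of which is modelled here; all names and fields are invented.

# In-memory sketch of VANET-style messaging between Onboard Units (OBUs) and
# Roadside Units (RSUs). The Network class stands in for the shared radio channel.
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    sender_id: str
    event: str        # e.g. "hard_braking", "accident"
    position: tuple   # (x, y) road coordinates, made up for the example

class Network:
    def __init__(self):
        self.nodes = []

    def join(self, node):
        self.nodes.append(node)

    def deliver(self, msg):
        for node in self.nodes:
            node.receive(msg)

class Node:
    def __init__(self, node_id, network):
        self.node_id = node_id
        self.network = network
        network.join(self)

    def broadcast(self, event, position):
        self.network.deliver(SafetyMessage(self.node_id, event, position))

    def receive(self, msg):
        if msg.sender_id != self.node_id:
            print(f"{self.node_id}: {msg.event} reported at {msg.position} by {msg.sender_id}")

channel = Network()
car_ahead = Node("OBU-car-ahead", channel)
your_car = Node("OBU-your-car", channel)
junction_rsu = Node("RSU-junction-3", channel)

car_ahead.broadcast("hard_braking", (51.3, 0.2))  # your car and the RSU both hear it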

Zugo electric car concept (Creative Commons 2.0: Milos Paripovic – flickr.com/photos/milosparipovic)

VANETs will also allow us to deliver infotainment applications such as video, live news, games and so on directly to your car. In fact, new systems are being developed that will allow applications to migrate seamlessly from your mobile phone to your car and back again as you move around.

The Middlesex University VANET Research Testbed

At Middlesex University we are seeking to be at the centre of this revolution by creating the Middlesex VANET Research Testbed. The first Academic VANET Research Testbed in the UK, our Testbed is located at our Hendon Campus in London and has four Roadside Units mounted on various buildings, as shown in Figure 2.

Figure 2: The Middlesex VANET Research Testbed at Hendon Campus, London

The Middlesex VANET Trial

Our work on VANETs recently featured in the IEEE Communications Magazine on future communication systems. We are currently conducting a trial to test our VANET Research Network and want to involve as many people as we can.

If you are happy to get involved, we will put an OBU in your vehicle for at least one day and ask you to drive around Hendon at specific times to get an idea of the traffic in the area around the Hendon Campus. We also have OBUs for cyclists and pedestrians.

Middlesex University has also been awarded a Transportation Research Innovation Grant by the Department for Transport (DfT) to extend the VANET Research Testbed to Watford Way (A41) behind our Hendon Campus (see Figure 3). We are currently working with Transport for London (TfL) and DfT to make this happen.

Figure 3: The Extended Middlesex Testbed including A41 (Watford Way)

But it’s not all about transport. VANETs will help us build communication networks that are more resilient and adaptable, and will allow people to communicate based on the new 4As paradigm: Anytime, Anywhere, Anyhow and Anything.

VANETs are here, and they are here to stay.

If you would like to take part in our MDX VANET Trial, please get in touch with Arindam Ghosh.

If you want to find out more, please visit our VANET Research webpage at www.vanet.mdx.ac.uk

Categories
Science & technology

Of men and machines (doing mathematics)

Dr Giuseppe Primiero is Senior Lecturer in Computing Science and a member of the Foundations of Computing research group at Middlesex University. Here Giuseppe discusses the recent British Colloquium for Theoretical Computer Science, which he helped organise.

How many of the functionalities found today on your standard desktop or mobile computer were first the object of theoretical study by mathematicians and computer scientists? And which of today’s theoretical results will be essential to tomorrow’s computing technologies?

Often this theoretical work is at such a high level of abstraction that you would hardly recognise the relevance to the final working application, but essential it is.

By its own nature, Computer Science is a field of research that uniquely combines theory and applications, in a way no other scientific field really does. Computers are, after all, physical artefacts ruled by logical principles – a marriage of mathematics and technology born out of many distinct theoretical and practical results.

Notable among these figures is the British mathematician Alan Mathison Turing. His theoretical device first proved the logical possibility of a general-purpose machine at a time when a ‘computer’ was principally a human doing computations. Only many years later did the word become widely and exclusively used for mechanical calculators.

Today, research in Computer Science is still born of theoretical results that are at first motivated by computational problems. In this first, traditional sense, mathematics is at the core of computation, and such theoretical research may take long routes before its results become essential to technologies of interest and profit for all mankind.

On the other hand though (and several decades after the birth of the first calculating machines), today’s research in mathematics and many other sciences is also led by machines working alongside their human counterparts. Their computational power, speed and large (although finite) memory are increasingly of aid when performing otherwise impossible calculations.

Hence, the progress of theoretical research (in particular in the mathematical field) relies heavily on the physical computations performed by machines. In this second and entirely novel sense, mathematics itself becomes a product of computation, and the way this inverted relation will affect the nature and principles of our knowledge and technology in the future is still to be discovered.

Alan Turing (image by parameter_bond, Creative Commons 2.0)

British Colloquium for Theoretical Computer Science

As a result of this, research in theoretical computer science is becoming of ever more direct interest and immediate relevance to fields outside academia. It was with this in mind that the Foundations of Computing Group at Middlesex recently hosted the 31st edition of the British Colloquium for Theoretical Computer Science (BCTCS).

This meeting traditionally welcomes PhD students from across the country to present their work alongside talks from internationally renowned researchers – offering an overview of the most relevant trends in the research area.

This year, the remarkable list of invited speakers boasted two Turing Award winners and one Fields Medalist (roughly speaking these are like the Nobel Prizes in Informatics and Mathematics, respectively).

Sir Tony Hoare FRS (Microsoft Research, University of Cambridge) opened the colloquium discussing the interaction between concurrent and sequential processes.

Tony is well known for developing the formal language CSP (Communicating Sequential Processes) to specify the interactions of concurrent processes, which enables programmers to make machines execute processes in overlapping time periods (i.e. concurrently) as opposed to one after another (i.e. sequentially).

This has led to crucial improvements in the speed of computing technologies, and is one of those essential mechanisms hidden in today’s machines that came about through theoretical investigation.
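
This is not CSP itself, but a minimal Python sketch of the idea it formalises: two processes run concurrently and interact only by passing messages over a channel, modelled here with threads and a queue. The example is invented purely for illustration.

# Two processes running concurrently (in overlapping time), communicating only
# through a channel, here modelled with a thread-safe queue.
import threading
import queue

channel = queue.Queue()

def producer():
    for n in range(5):
        channel.put(n * n)     # send a value down the channel
    channel.put(None)          # sentinel: no more values

def consumer():
    while True:
        value = channel.get()  # block until the producer sends something
        if value is None:
            break
        print(f"consumer received {value}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()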

Per Martin-Löf (Stockholm University) opened proceedings on the second day with a talk on the mathematical structures underlying repetitive patterns and functional causal models. This very abstract talk was not surprising from a logician best-known for his type theory – a mathematical result that has been at the very basis of computer programs used today to perform proofs and obtain logical deduction in an automated way.

In the afternoon Samson Abramsky FRS (University of Oxford) offered his fascinating views on contextuality, a key feature of quantum mechanics that permits quantum information processing and computation to transcend the boundaries of classical computation. This could possibly be the next stage of our interaction with machines.

On the third day we welcomed, from Pittsburgh, USA, Thomas Hales. Tom is best known for proving the Kepler Conjecture about the close packing of spheres.

Imagine a grocer selling oranges who wants to stack as many as possible in a small space. Most people naturally build an arrangement known as face-centred cubic, and the famous 17th-century astronomer Johannes Kepler conjectured that this arrangement, filling space at a density of around 74%, could not be beaten.
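
For the record, the density of that face-centred cubic arrangement – the ‘around 74%’ quoted above – can be worked out exactly: in a cubic cell of side a, the spheres touch along a face diagonal, so 4r = a√2, and

\[
\rho_{\mathrm{fcc}}
  = \frac{4 \cdot \tfrac{4}{3}\pi r^{3}}{a^{3}}
  = \frac{4 \cdot \tfrac{4}{3}\pi \bigl(\tfrac{a}{2\sqrt{2}}\bigr)^{3}}{a^{3}}
  = \frac{\pi}{3\sqrt{2}}
  \approx 0.7405 .
\]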

Remarkably, it took more than 370 years before a proof appeared. The most trusted proof was given by Tom, but it ran to 250 pages and involved three gigabytes of computer programs, data and results. Reviewers of his work noted they were 99% certain of its accuracy, but incredibly Tom was not satisfied with this and spent a decade engaging an automated theorem prover (running on a computer) to verify all parts of his proof. The climax of this tour de force was the announcement in 2015, by Tom and his many collaborators, of a formal proof of the Kepler Conjecture.

The theorem-proving theme continued in the afternoon when renowned mathematician Sir Tim Gowers FRS (University of Cambridge) spoke on his ideas of an extreme human-oriented, heuristic-based automated theorem prover that would be of use to everyday mathematicians. Tim spoke about his programme to remedy a lack of engagement between most working mathematicians and the automated theorem proving community.

The final day welcomed another star of Computer Science as Joseph Sifakis (University of Lausanne) spoke about rigorous system design. This talk was of interest to theorists and the more application-oriented alike, before the colloquium finished with Andrei Krokhin (Durham University) discussing recent research on the valued constraint satisfaction problem.

Theoretical Computer Science is today experiencing an exciting phase in research methodology: the collaboration of men and machines is offering unforeseen possibilities, as well as new problems. Theory and technology are becoming less distant, and the effects of this collaboration are being felt more strongly and more quickly in everyday life.