
Middlesex University Computer Science Colloquium


The Middlesex Computer Science Colloquium is the monthly colloquium of the Department of Computer Science at Middlesex University, London.

It is organised by Can Baskent.


Colloquium: “When Humans and Computers Come Together: A New or Resurged Old Research Paradigm?”

2 February 2021. Tuesday. 3PM. London time. On Zoom.

Title When Humans and Computers Come Together: A New or Resurged Old Research Paradigm?

Speaker Shujun Li, School of Computing, University of Kent, UK.

Abstract In this talk the speaker will present his personal observations and thoughts on a range of newly emerged research topics and concepts in computer science and related disciplines, all centred on the key idea of putting humans and computers together. He will look at the historical roots of these concepts in computer science and the application backgrounds of these topics, and will attempt to connect many of them. The talk will touch on a number of fundamental concepts in computer science, including the term “computing” itself, and will cut across several important areas of computer science, including AI, HCI, cyber security, and information visualisation. It will also go beyond computer science to look at closely related disciplines such as cognitive and behavioural sciences, modelling and simulation, sociology, and engineering.

Bio Shujun Li is Professor of Cyber Security at the School of Computing, University of Kent in the UK. He is also Director of the Institute of Advanced Studies in Cyber Security and Conflict (SoCyETAL) and the Kent Interdisciplinary Research Centre in Cyber Security (KirCCS), which represent the University of Kent as one of 19 UK government-recognised Academic Centres of Excellence in Cyber Security Research (ACEs-CSR). His research interests are mostly around interdisciplinary topics covering cyber security, human factors, digital forensics and cybercrime, multimedia computing, and chaotic systems in the digital domain and their practical applications. He is currently leading two interdisciplinary and inter-institutional research projects on human-centric approaches to cyber security and privacy. He has published over 100 research papers in international journals and conferences, and has received two Best Paper Awards. In 2012 he received an ISO/IEC Certificate of Appreciation for being the lead editor of ISO/IEC 23001-4:2011, the 2nd edition of the MPEG RVC (Reconfigurable Video Coding) standard. He is currently on the editorial boards of a number of international journals, and has been on the organising or technical programme committees of over 100 international conferences and workshops. He is a Fellow of BCS, a Senior Member of IEEE, and a Member of ACM. He is a Vice President and Founding Co-Director of the ABCP (Association of British Chinese Professors). More about his research and professional activities can be found on his personal website http://www.hooklee.com/.

Zoom Link https://mdx-ac-uk.zoom.us/j/6684138396?pwd=d0I5V2JlTHVKbjlKWXZ2MW1RZ0ozQT09

Meeting ID: 668 413 8396

Passcode: mdx


Colloquium: “Cell Assembly-based Task Analysis”

5 January 2021. Tuesday. 3PM. London time. On Zoom.

Title Cell Assembly-based Task Analysis (CAbTA)

Speakers Dan Diaper, DDD Systems, and Chris Huyck, Department of Computer Science, Middlesex University, London.

Abstract Based on an Artificial Neural Network model, Cell Assembly-based Task Analysis is a new method that outputs a task performance model composed of integrated mind-brain Cell Assemblies, which are currently believed to be the most plausible general organisation of the brain and of how it supports mental operations. A simplified model of Cell Assemblies and their cognitive architecture is described and then used in the method. A brief sub-task is analysed. The method’s utility for research in Artificial Intelligence, neuroscience and cognitive psychology is discussed, and the possibility of a General Theory is suggested. Some of the work presented here is further explained in an arXiv paper entitled “The Task Analysis Cell Assembly Perspective”.

Bios Dr. Dan Diaper was previously Professor of Systems Science & Engineering and the last Head of the Department of Computing at Bournemouth University. After an academic career spanning a quarter of a century, which finished in 2006 after two years as a Senior Researcher at Middlesex University, he is now an independent researcher and consultant. After completing his doctorate in the Department of Experimental Psychology, University of Cambridge in 1982, Dr. Diaper turned to research on applied cognitive ergonomics, Human-Computer Interaction (HCI), Computer Supported Cooperative Work (CSCW) and Artificial Intelligence (AI), and subsequently to research on software engineering and computer science. Dr. Diaper is one of the world’s experts on task analysis and CSCW; he chaired the U.K. Government’s Department of Trade & Industry’s CSCW SIG in the early 1990s, and for a dozen years he was co-editor of Springer-Verlag’s CSCW book series, responsible for the publication of over thirty specialist technical books. For a dozen years he was the General Editor of the Elsevier journal ‘Interacting with Computers: The Interdisciplinary Journal of Human-Computer Interaction’. Dr. Diaper has published about eighty journal/conference papers or book chapters, and he has edited six academic books and four HCI conference proceedings.

Dr Christian R. Huyck is Middlesex University’s Professor in Artificial Intelligence and has been working in AI for over 30 years. He has over 100 publications in a range of journals and conferences. His two main research tracks are Natural Language Processing (NLP) and neural nets using Cell Assemblies (CAs). He heads Middlesex’s AI research group, which consists of a dozen academic staff. Dr Huyck received his PhD from the University of Michigan in 1994. While there he worked on a range of AI projects, but concentrated on NLP. In addition to CAs and NLP, Huyck has a range of AI-related interests including ontologies and knowledge management. Huyck has concentrated on neural modelling research since arriving at Middlesex in 1998, proposing CAs as a good basis for a cognitive model. This work has been based on point spiking neural models and uses Hebbian learning. He has worked on categorisation, agents, conversation, rules, reinforcement learning, cognitive models and natural language parsing, all with spiking neurons.

Zoom Link https://mdx-ac-uk.zoom.us/j/6684138396?pwd=d0I5V2JlTHVKbjlKWXZ2MW1RZ0ozQT09

Meeting ID: 668 413 8396

Passcode: mdx


Colloquium: “A Panel Discussion on Algorithms and Society”

1 December 2020. Tuesday. 3PM. London time. On Zoom.


Title Mapping the Public Debate on Ethical Concerns: Algorithms in Mainstream Media

Speaker Balbir S. Barn, Department of Computer Science, Middlesex University, London.

Abstract Algorithms are in the mainstream media news on an almost daily basis. Their context is invariably artificial intelligence (AI) and machine learning decision-making. In media articles, algorithms are described as powerful, autonomous actors capable of producing actions that have consequences. Despite a tendency towards deification, the prevailing critique of algorithms focuses on ethical concerns raised by decisions resulting from algorithmic processing. This paper reports results from the first systematic mapping study of articles appearing in leading UK national papers, viewed from the perspective of widely accepted ethical concerns such as inscrutable evidence, misguided evidence, unfair outcomes and transformative effects.

Bio Balbir S. Barn, PhD, is Professor of Software Engineering at Middlesex University. Balbir has 15 years of industrial research experience, working in research centres at Texas Instruments and JP Morgan Chase, as well as leading academically funded research (over £2.5 million). Balbir’s primary research area is model-driven software engineering. Currently, Balbir is conducting research on simulation environments for digital twins, focusing on the study, evaluation and advancement of value-sensitive design and on epistemic concerns in the use of simulation technology. Balbir has published over 125 peer-reviewed papers in leading international conferences and journals, and recently completed an edited book on “Advanced Digital Architectures for Model-Driven Adaptive Enterprises”.

Title Mastering the Algorithm and Mentoring the Human Decision-Maker

Speaker Mandeep K. Dhami, Department of Psychology, Middlesex University, London.

Abstract I present some evidence for the reasons why algorithms should be preferred over unaided human judgment, and attempt to offer measured solutions to problems associated with so-called ‘algorithmic bias’. Although I will refer to the crime and justice domains, the issues I raise are applicable to other domains.

Bio Mandeep K. Dhami, PhD, is Professor in Decision Psychology at Middlesex University, London. She previously held academic and research positions in the UK (University of Surrey and University of Cambridge), Canada (University of Victoria), the USA (University of Maryland) and Germany (Max Planck Institute for Human Development). Mandeep has also worked as a Principal Scientist for the UK Ministry of Defence, and has work experience in two British prisons. Mandeep is an internationally recognised expert on human judgment and decision-making, risk perception, and uncertainty communication. She applies her expertise to solving problems in the criminal justice and intelligence analysis domains. Mandeep has authored 120 scholarly publications and is lead editor of a book entitled ‘Judgment and Decision Making as a Skill’ published by Cambridge University Press. To date, she has obtained over £2 million in research funding, and her research has received several international awards, including from the European Association for Decision Making and the American Psychological Association (Division 9). Mandeep regularly advises Government bodies nationally and internationally on evidence-based policy and practice. Most recently, she was a UK representative on the NATO SAS-114 research panel on ‘Assessment and Communication of Risk and Uncertainty to Support Decision Making’ and was awarded the 2020 NATO SAS Panel Excellence Award. Mandeep is currently co-Editor of Judgment and Decision Making, the official journal of both the (US-based) Society for Judgment and Decision Making and the European Association for Decision Making.

Title Regulating Machine Learning

Speaker Carlisle George, Department of Computer Science, Middlesex University, London.

Abstract The increasing applicability of AI and Machine Learning (ML) in all aspects of life has raised the important issue of how best to develop a governance framework to mitigate the many concerns related to their use. Some of these concerns relate to accountability, liability, privacy/data protection and bias/discrimination. Many of these challenges stem from the lack of algorithmic transparency, which is caused by constraints related to understandability (how ML models work before deployment), explainability (how ML models reach particular decisions after deployment) and autonomy (ML models can change, evolve and adapt through their deployment). In this debate, I argue that, given the concerns associated with the use of machine learning, it is inevitable that regulatory measures will be adopted in the future. However, how this regulation develops, and by whom, is an issue that we need to consider carefully. As the effects of machine learning become the subject of litigation, case law will develop as judges make rulings on issues such as liability. Sector-specific regulators may need to develop requirements for the development, testing and deployment of ML technologies. Professional organisations and civil society will also have roles to play. As academics and researchers, we can also play an integral role in shaping how regulation develops. What practical steps can we take to address transparency? Is it possible to develop ways of understanding ML models and explaining what they do? How do we address the training of ML algorithms so as not to replicate bias that is within the data itself? How can we develop methods to “appeal” against decisions of perceived “infallible” AI systems?

Bio Dr Carlisle George is an Associate Professor at Middlesex University. Among other qualifications, he holds a PhD in Computer Science (University of London) and a Master’s degree (LLM) in “IT and Communications Law” (LSE). He is qualified as a Barrister and was called to the Bar of England and Wales at Lincoln’s Inn, London, and the Bar of the Eastern Caribbean Supreme Court (where he maintains his legal practice licence). Dr George has many years of experience working as a senior legal expert/advisor on legal and regulatory issues in EU projects, including an EAHC study on “An overview of the national laws on electronic health records in the EU Member States and their interaction with the provision of cross-border eHealth services”; a Directorate E study on the completeness and conformity of Member States’ measures to transpose the PNR Directive; and a Directorate C study on the legal and political context for setting up a European identity document. He has also been a legal researcher on EU-funded projects including VALCRI (Visual Analytics for Sense-Making in Criminal Intelligence Analysis) and SAMi2 (Semantics Analysis Monitor for Illegal Use of the Internet). His main areas of focus are legal aspects of health IT, data protection/privacy, legal aspects of e-commerce, digital forensics and data science. He has published many academic articles, co-edited two books on eHealth and medical informatics, organised six international Health IT workshops, and continues to be research active with doctoral students. He is also the convenor of the ALERT (Aspects of Law and Ethics Relating to Technology) research group at Middlesex.

Title Human-Centred Algorithms – A Recap

Speaker B.L. William Wong, Professor of Human-Computer Interaction, Department of Computer Science, Middlesex University, London, and Professor-in-Residence, Genetec, Inc.

Abstract In this talk I will give a brief recap to provide continuity from last month’s opening CS Colloquium, where we discussed some lessons from the VALCRI project about designing ethical safeguards for using AI/ML in criminal intelligence and investigative analysis. These include: (1) making ethical problems tractable; (2) designing so that humans can ascertain or challenge whether the outcomes or recommendations of AI/ML are sensible; and (3) recognising that the type of human-machine partnership will affect how the ethical problems manifest themselves. These problems were discussed in the context of human-centredness in relation to the algorithmic transparency framework for black-box algorithms and a cognitive engineering-based approach to designing for visibility and transparency.

Bio Professor Wong’s research is in cognitive engineering and the representation and interaction design of visual analytics user interfaces that enhance situation awareness, sense-making, reasoning, and decision making in dynamic environments. His current research focuses on designing for transparency in human-machine teams. In September 2020, Dr Wong returned from a 2-year industrial sabbatical as Principal Scientist at Genetec Inc., where he and his team commercialised selected IP from the EU-funded VALCRI project. He has received over US$25.3 million in research grants and published over 120 scientific peer-reviewed articles with his students and colleagues.

Zoom Link https://mdx-ac-uk.zoom.us/j/6684138396?pwd=d0I5V2JlTHVKbjlKWXZ2MW1RZ0ozQT09

Meeting ID: 668 413 8396

Passcode: mdx


Colloquium: “Human-Centred Algorithms”

3 November 2020. Tuesday. 3PM. London time. On Zoom.

Title Human-Centred Algorithms

Speaker B.L. William Wong, Professor of Human-Computer Interaction, Middlesex University London, and Professor-in-Residence, Genetec, Inc.

Abstract In this talk I will discuss some of the ethical problems that researchers have come to observe about the use of black-box algorithms and how we have interpreted them to make them tractable, discuss the notion of human-centredness in relation to these black-box algorithms, and present our algorithmic transparency framework and a cognitive engineering-based approach to designing for visibility and transparency.

Bio Professor Wong’s research is in cognitive engineering and the representation and interaction design of user interfaces that enhance situation awareness, sense-making, analytic reasoning, and decision making in dynamic environments such as air traffic control and emergency ambulance control. His current research focuses on designing for algorithmic transparency in human-machine teams in intelligence and investigative analysis settings. In September 2020, Dr Wong returned from a 2-year industrial sabbatical as Principal Scientist at Genetec Inc., where he and his team commercialised selected IP from the EU-funded VALCRI project. VALCRI was a 17-organisation R&D consortium, led by Wong from 2014 to 2018, tasked with developing a next-generation visual analytics and sense-making system for criminal intelligence analysis and investigation. He has received over US$25.3 million in research grants and published over 120 scientific peer-reviewed articles with his students and colleagues.

Zoom Link https://mdx-ac-uk.zoom.us/j/6684138396?pwd=d0I5V2JlTHVKbjlKWXZ2MW1RZ0ozQT09

Meeting ID: 668 413 8396

Passcode: mdx