
Department of
Computer Science
Tari Rorohiko

Computing and Mathematical Sciences

Recent Seminars


Cinema Data Mining: The Smell of Fear

Jörg Wicker
Institute for Computer Science, Johannes Gutenberg University Mainz
Tuesday 25 August 2015
While the physiological response of humans to emotional events or stimuli is well investigated for many modalities (such as EEG, skin resistance, etc.), surprisingly little is known about the exhalation of so-called Volatile Organic Compounds (VOCs) at quite low concentrations in response to such stimuli. VOCs are molecules of relatively small mass that quickly evaporate or sublimate and can be detected in the air that surrounds us. The paper introduces a new field of application for data mining, where trace gas responses of people reacting on-line to films shown in cinemas (or movie theaters) are related to the semantic content of the films themselves. To do so, we measured the VOCs from a movie theater over a whole month at intervals of thirty seconds, and annotated the screened films with a controlled vocabulary compiled from multiple sources. To gain a better understanding of the data and to reveal unknown relationships, we built prediction models for so-called forward prediction (the prediction of future VOCs from the past), backward prediction (the prediction of past scene labels from future VOCs), which is a form of abductive reasoning, and Granger causality. Experimental results show that some VOCs and some labels can be predicted with relatively low error, and that hints of causality with low p-values can be detected in the data. The data set is publicly available at: https://github.com/joergwicker/smelloffear.
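
The abstract describes the modelling only at a high level. Purely as a hypothetical illustration of the Granger-causality part, the Python sketch below tests whether a scene label helps predict a VOC channel beyond its own past; the column names and file layout are assumptions, not the actual schema of the published smelloffear dataset.

```python
# Hypothetical sketch: does a scene label Granger-cause a VOC signal?
# Column names ("co2", "suspense") and the CSV layout are assumptions,
# not the actual schema of the smelloffear dataset.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# One row per 30-second measurement interval: VOC concentrations plus
# binary scene-label annotations for the film being screened.
df = pd.read_csv("screenings.csv")                 # hypothetical file

# grangercausalitytests checks whether the SECOND column helps predict
# the FIRST column beyond the first column's own past values.
data = df[["co2", "suspense"]].dropna().to_numpy()
results = grangercausalitytests(data, maxlag=8)    # up to 8 lags = 4 minutes

for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag:2d}: F = {f_stat:6.2f}, p = {p_value:.4f}")
```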

Jörg Wicker is a research associate in the Data Mining Group at the Institute for Computer Science of the Johannes Gutenberg University Mainz. His main research interests are Machine Learning and Data Mining and their applications in Bioinformatics, Cheminformatics, and Computational Sustainability. He completed his PhD in computer science at the Technical University of Munich, writing his thesis on Large Classifier Systems in Bio- and Cheminformatics.

 

Creating a security enabled and low power internet of things platform

Dan Collins
The University of Waikato
Tuesday 18 August 2015
Dan Collins is doing a COMP520 project, but will be overseas on the day of the Honours Conference. As a substitute, we've arranged for him to give a presentation in the regular department seminar slot. The abstract for his presentation is:

Deploying existing internet of things platforms into low power environments poses a significant challenge for developers. Wi-Fi, Bluetooth and Linux are unsuitable for battery powered installations where low maintenance and long battery life are key requirements. The IEEE 802.15.4 standard defines a way to create such a low power platform but neglects features like secure provisioning. Available IEEE 802.15.4 implementations impose network architecture restrictions on the developer that are unacceptable for many potential applications. This project addresses these concerns by creating a new platform specifically designed to be easily provisionable as well as low power.

 

Prototyping Mobile Experiences Fast and Effectively

David Mannl
Mobile UX Architect, Fairfax Media, Auckland
Tuesday 18 August 2015
Learn about the tools, techniques and methodologies for rapidly turning an app idea into an interactive prototype that runs on an actual device. This session is targeted at designers and developers with an interest in mobile apps. All other students are welcome to join as well. Do you have the next billion dollar app idea just waiting to be pitched to the world?

 

Deep Learning, Multimedia Mining and Medical Image Analysis

Christopher Pal
Polytechnique Montreal, University of Montreal, Canada
Tuesday 4 August 2015
The combination of big data sets and deep learning has sparked a revolution in speech recognition and computer vision, and these insights and developments have been rapidly propagating to other disciplines. In particular, many multimedia processing problems ranging from audio, image, video and language understanding to the analysis of specialized medical imagery have started to see the impact of recent developments in deep learning.

I'll begin this talk by briefly reviewing some of the initial successes that helped spark the recent wave of interest in deep learning. I'll go on to focus on some more recent advances in multimedia analysis due to the use of deep learning techniques. I'll show how we have used deep learning techniques and combined multiple types of models for different modalities of video to win a competitive challenge on emotion recognition in the wild. I'll also talk about some of our work on mining Wikipedia, Google image search results and YouTube, and the role of semi-supervised learning. I'll touch upon our very recent work using deep learning methods to produce semantically appropriate and syntactically well-formed phrases describing the visual content of video clips.
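
The abstract does not spell out how the modality-specific models were combined. As a generic, hedged illustration of late fusion (one common way to merge per-modality predictions for tasks such as emotion recognition), the sketch below averages class-probability outputs with per-modality weights; the model names, weights and shapes are invented and this is not the speaker's actual method.

```python
# Minimal late-fusion sketch (illustrative only, not the challenge-winning
# system): combine class-probability predictions from separate per-modality
# models with a weighted average.
import numpy as np

def late_fusion(prob_maps: dict, weights: dict) -> np.ndarray:
    """prob_maps: modality -> (n_clips, n_classes) class probabilities."""
    total = sum(weights.values())
    fused = sum(w / total * prob_maps[m] for m, w in weights.items())
    return fused.argmax(axis=1)            # predicted class per video clip

# Hypothetical per-modality outputs for 2 clips and 7 emotion classes.
rng = np.random.default_rng(0)
probs = {m: rng.dirichlet(np.ones(7), size=2) for m in ("video", "audio", "text")}
print(late_fusion(probs, {"video": 0.5, "audio": 0.3, "text": 0.2}))
```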

Obtaining medical data at large scale is much more challenging compared to general multimedia; however, the accurate analysis of medical imagery has the potential to directly affect human health outcomes. This talk will conclude with a discussion of how we have been using deep learning methods for medical image analysis. I'll talk about some of our older and more recent work on segmentation and the BRATS brain tumor segmentation challenge.

 

Document DNA: Distributed Content-Centered Provenance Data Tracking

Michael Rinck
Department of Computer Science, University of Waikato, Hamilton
Tuesday 21 July 2015
This seminar presents a new content-centered approach to provenance data tracking: the Document DNA. Knowledge workers are overwhelmed as they find it hard to structure, maintain, and find re-used content within their digital workspace. This issue is aggravated by the growing amount of digital data knowledge workers need to maintain. The talk introduces a concept for tracing the evolution of text-based content across documents in the digital workspace, without the need for a centralized tracking system. Provenance data has been used in data security, in databases, and to track knowledge workers' interactions with digital content. However, very little research is available on the usefulness of provenance data for knowledge workers. Furthermore, current provenance data research is based on central systems and tracks provenance at the file level.

I conducted three user studies to explore current issues knowledge workers face when working with digital content. The first study examined the problems knowledge workers currently encounter when re-using digital content. The second study examined to what extent the issues detected in the first study are addressed by document management systems. I found that document management systems do not fully address these issues, and that not all knowledge workers make use of the document management system available to them. The third study examined reasons for the low adoption of available document management systems. As a result of these three studies I identified task categories and a variety of related issues.

Driven by these findings, I developed a conceptual model for Document DNA, which tracks the provenance of data used in the identified tasks. To show the effectiveness of my approach, I created a software prototype and conducted a realistic user study. In my final user study, participants executed example tasks gathered from real knowledge workers with and without the support of the software prototype. The results of this study confirm that the Document DNA successfully addresses the issues identified. The participants were significantly faster when performing the tasks using the software prototype; most participants using traditional methods failed to identify the provenance of the data, whereas the majority of participants using the software prototype succeeded.

 

The Challenge of Understanding Embodied Interaction with Information Rich Environments

Christopher Lueg
University of Tasmania, Hobart, Australia
Thursday 16 July 2015
Research in allied disciplines suggests that failing to notice information that is actually present in an environment is not an exception but rather to be expected. The specific characteristics of human bodies, along with the cognitive and perceptual systems that have co-evolved with these bodies, are such that humans perceive only a fraction of the information that is potentially perceivable. In this talk Professor Lueg argues that the way we see things matters if we want to ensure that we don't design information systems and interactions for "people like us", given that, quite often, we [academics] are not exactly representative of our intended audiences. Related concepts originating from the cognitive sciences, including embodiment and scaffolding, will also be discussed since they can be used to inform the design of artifacts. It is worth noting that Professor Lueg understands human-computer interaction as interaction with pretty much any kind of computer-based artifact, ranging from desktop computers and smartphones to washing machines and parking meters.

 

Improving Course Relevance: Techniques for Incorporating the Social Value of Computing into your Courses [Seminar/Workshop]

Mikey Goldweber
Department of Computer Science, Xavier University, Montgomery, Ohio
Tuesday 9 June 2015
We begin by asking to what degree a given undergraduate curriculum either reinforces impeding myths and misconceptions about computing or works to dismantle them. Particular focus should be placed on motivating examples and programming projects throughout the curriculum, especially in the introductory courses. Students' perceptions of computing, if they have a realistic opinion regarding computing at all, are strongly correlated with the programming projects they or their peers have worked on. Is computing seen as boring (duck counting), not serious (games), or maybe solely focused on business/commerce?

Research indicates that students, particularly with respect to selection of major, seek to satisfy their values over their interests. It is time that computing curricula align accordingly. This workshop introduces "Computing for the Social Good: Educational Practices" (CSG-Ed). CSG-Ed is an umbrella term meant to incorporate any educational activity, from small to large, that endeavors to convey and reinforce computing's social relevance and potential for positive societal impact.

The goal of this workshop is to equip computing educators with a set of techniques to create new CSG-Ed oriented assignments, or to repurpose old ones. From a first day activity in the first computing course, to the capstone experience, the opportunity exists to showcase in an integrated way the social value of computing across a broad spectrum of fields.

In the afternoon there will be an optional additional session, provided there is enough interest, for faculty to work with each other and/or the workshop facilitator on specific courses and/or assignments.

Bio: Mikey Goldweber has been a computing professor in the USA since 1990. He works primarily in computing education with foci in curriculum design, introductory programming, and pedagogic operating systems. For the past four years Mikey has been one of the driving forces behind the "Computing for the Social Good" movement in the computing education community. Finally, Mikey is the current Chair of the ACM Special Interest Group on Computers and Society (SIGCAS).

 

Scalable text mining with sparse generative models

Antti Puurula
Department of Computer Science, The University of Waikato, Hamilton
Monday 8 June 2015
About the talk: Antti will tell us about his PhD research, prior to his thesis oral.

The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods.

This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines.
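
The abstract states the idea only in general terms. The sketch below is a minimal Python illustration of inverted-index (sparse) scoring for a multinomial generative classifier, i.e. the general technique of touching only non-zero term counts; it is not the implementation developed in the thesis.

```python
# Minimal sketch of sparse, inverted-index inference for a multinomial
# generative classifier (the general idea, not the thesis implementation).
# Each class stores log-probabilities only for terms it has actually seen;
# scoring a document then touches only the posting lists of its own terms.
import math
from collections import defaultdict

class SparseNB:
    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.postings = defaultdict(dict)  # term -> {class: log P(term|class)}
        self.default = {}                  # class -> log P(unseen term|class)
        self.prior = {}                    # class -> log P(class)

    def fit(self, docs, labels, vocab_size):
        counts = defaultdict(lambda: defaultdict(int))
        totals, class_n = defaultdict(int), defaultdict(int)
        for doc, c in zip(docs, labels):   # doc: {term: term frequency}
            class_n[c] += 1
            for term, tf in doc.items():
                counts[c][term] += tf
                totals[c] += tf
        for c in class_n:
            self.prior[c] = math.log(class_n[c] / len(docs))
            denom = totals[c] + self.alpha * vocab_size
            self.default[c] = math.log(self.alpha / denom)
            for term, tf in counts[c].items():
                self.postings[term][c] = math.log((tf + self.alpha) / denom)

    def predict(self, doc):
        # Start every class from its prior plus an "all terms unseen" baseline,
        # then correct only the (term, class) pairs present in the index.
        length = sum(doc.values())
        score = {c: self.prior[c] + length * self.default[c] for c in self.prior}
        for term, tf in doc.items():
            for c, logp in self.postings.get(term, {}).items():
                score[c] += tf * (logp - self.default[c])
        return max(score, key=score.get)
```

Because the correction loop visits only the posting lists of the document's own terms, classification cost grows with the number of non-zero features rather than with the vocabulary size, which is the source of the scalability the abstract refers to.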

The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets is conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with an order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.

 

Non-parametric Methods for Unsupervised Semantic Modelling

Wray Buntine
Monash University, Melbourne, Australia
Monday 8 June 2015
This talk will cover some of our recent work on extending topic models to serve as tools in text mining and NLP (and hopefully, later, in IR) when some semantic analysis is required. In some sense our goals are akin to the use of Latent Semantic Analysis. The basic theoretical/algorithmic tool we have for this is non-parametric Bayesian methods for reasoning on hierarchies of probability vectors. The concepts will be introduced but not the statistical detail. Then I'll present some of our KDD 2014 paper (Experiments with Non-parametric Topic Models), which describes what is currently the best-performing topic model by a number of metrics.
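
The talk abstract names no toolkit. As a small, generic illustration of non-parametric topic modelling, where a hierarchical Dirichlet process lets the data determine the number of topics instead of fixing it in advance, the sketch below uses gensim's HdpModel on a made-up toy corpus; it is not the hierarchical Pitman-Yor machinery from the speaker's KDD 2014 paper.

```python
# Toy illustration of a non-parametric topic model: an HDP infers the number
# of topics from the data. Generic gensim example only; not the speaker's models.
from gensim.corpora import Dictionary
from gensim.models import HdpModel

docs = [
    "topic models learn latent themes from word counts".split(),
    "bayesian non parametric priors let the data choose model size".split(),
    "search engines rank documents for a user query".split(),
    "retrieval systems score documents against queries".split(),
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

hdp = HdpModel(corpus, id2word=dictionary, random_state=1)
for topic in hdp.print_topics(num_topics=3, num_words=5):
    print(topic)
```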

 

Prototyping Mobile Experiences Fast and Effectively

David Mannl
Mobile UX Architect, Fairfax Media, Auckland
Tuesday 21 April 2015
Learn about the tools, techniques and methodologies for rapidly turning an app idea into an interactive prototype that runs on an actual device. This session is targeted at designers and developers of all skill levels. All other students are welcome to join as well. Do you have the next billion dollar app idea just waiting to be pitched to the world?

 

Smartness is Suddenly Everywhere

Farhad Mehdipour
Kyushu University, Japan
Wednesday 8 April 2015
This talk gives an overview of emerging computing paradigms, key enabling technologies for realizing them, and a few example applications. The emphasis will be on solutions that integrate computing and networking systems with the physical world, which is sensed and controlled by sensors and actuators. A big picture of a smart computing system, including its different components and the requirements associated with the type of target application, is presented. It will be shown how the growth of such a system may lead to an ever-increasing data stream and the emergence of big data. Moreover, two example applications, a smart system for environment monitoring and the Smart Grid, will be introduced and compared with respect to their varying smartness requirements. Challenges and solutions for developing components of these systems, including an energy-efficient wireless sensor network as well as data processing, mining and knowledge discovery systems, will be discussed throughout this talk.

 

Dynamic impact assessment for disruption-aware intrusion alert prioritization and response selection

Harris Lin
Ames Laboratory, U.S. Department of Energy, Iowa State University, USA
Tuesday 17 February 2015
Striking the balance between cyber security and convenience has been a long-standing challenge: while a strict security policy prevents intrusion, it may severely disrupt critical services of an organization through automated blocking. Traditionally, static whitelists are manually maintained to capture such critical services, which should not be interfered with or which are trusted out of necessity. However, it is extremely difficult for these lists to be accurate and exhaustive, as in reality the services change over time. In this talk we discuss a machine learning based approach that aims to model the relationship between the target organization and external resources, and to dynamically suggest changes to the whitelists. We extract features from network flow summaries, bipartite graph analysis, and the contents of web pages crawled from the hostnames of the resources. The resulting model, trained using the WEKA data mining framework, could also be used as part of intrusion alert prioritization, response selection, and exfiltration discovery.
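
The abstract lists the feature sources but not the modelling details, and the actual system was trained with WEKA. Purely as a hedged sketch of the overall shape of such a pipeline, the Python example below combines made-up flow/graph features with crawled page text and trains a simple classifier to score external resources as whitelist candidates; every feature, value, and label is invented for illustration.

```python
# Hedged sketch (scikit-learn here, not the WEKA setup used in the actual
# work): score external resources as whitelist candidates from hand-crafted
# flow/graph features plus crawled page text. All data below is made up.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Per external resource: numeric features from flow summaries and the
# bipartite (internal host <-> external resource) graph, plus crawled text.
flow_graph_feats = np.array([[120.0, 0.9, 14],   # MB/day, host coverage, degree
                             [  2.0, 0.1,  1],
                             [ 80.0, 0.7, 10]])
page_text = ["enterprise software update service",
             "free movie streaming click here",
             "continuous integration build artifacts"]
labels = [1, 0, 1]                               # 1 = suggest for whitelist

text_vec = TfidfVectorizer()
X = hstack([csr_matrix(flow_graph_feats), text_vec.fit_transform(page_text)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])                # whitelist scores
```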

 

Collaboration opportunities with Imersia

Roy Davies and Michael Rinck
Imersia Ltd, Auckland
Friday 13 February 2015
Dr. Davies will give a 30 minute presentation on the challenges and opportunities Imersia faces right now. Afterwards he would like to explore collaboration avenues with researchers from Waikato, especially from the fields of:
  • Artificial Intelligence: Reinforcement Learning and Natural Language Processing,
  • Cloud Computing: Security and Scalability,
  • Mobile Recommender Systems, and
  • Mobile Augmented Reality.
Possible examples are:
  • Student Projects (as in Comp314),
  • Student Internships (both undergrad and post-grad level), and
  • Grad Student Projects (specifically Ph.D. level, funding is an option).
Collaboration at higher levels, such as joint applications for the MBIE Smart Ideas fund, is a longer-term goal.

 

Unlocking the Secrets of 4.5 Billion Pages: A HathiTrust Research Center Update

J. Stephen Downie
University of Illinois at Urbana-Champaign, USA
Tuesday 20 January 2015
This seminar provides an update on the recent developments and activities of the HathiTrust Research Center (HTRC). The HTRC is the research arm of the HathiTrust, an online repository dedicated to the provision of access to a comprehensive body of published works for scholarship and education.

The HathiTrust is a partnership of over 100 major research institutions and libraries working to ensure that the cultural record is preserved and accessible long into the future. Membership is open to institutions worldwide.

Over 12.5 million volumes (4.5 billion pages) have been ingested into the HathiTrust digital archive from sources including Google Books, member university libraries, the Internet Archive, and numerous private collections. The HTRC is dedicated to facilitating scholarship using this enormous corpus through enabling access to the corpus, developing research tools, fostering research projects and communities, and providing additional resources such as enhanced metadata and indices that will assist scholars to more easily exploit the HathiTrust corpus.

This lecture will outline the mission, goals and structure of the HTRC. It will also provide an overview of recent work being conducted on a range of projects, partnerships and initiatives. Projects include the Workset Creation for Scholarly Analysis project (WCSA, funded by the Andrew W. Mellon Foundation) and the HathiTrust + Bookworm project (HT+BW, funded by the National Endowment for the Humanities). HTRC's involvement with the NOVEL text mining project and the Single Interface for Music Score Searching and Analysis (SIMSSA) project, both funded by the SSHRC Partnership Grant programme, will be introduced. The HTRC's new feature extraction and Data Capsule initiatives, part of its ongoing efforts to enable non-consumptive analysis of copyrighted materials, will also be discussed. The talk will conclude with a brief discussion of the ways in which scholars can work with and through the HTRC.

 
