The following papers will be presented at the 2007 New Zealand Computer Science Research Student Conference.
Algorithm Engineering, Digital Libraries, Artificial Intelligence, Machine Learning, Knowledge Engineering
We describe how conventional automatic keyphrase indexing with domain-specific thesauri can be adapted to indexing with the lexical database WordNet. The goal of indexing is to determine the main topics of a document. WordNet organizes words into synonymous groups called synsets, each representing a single concept. The proposed algorithm first maps document phrases onto WordNet synsets and then determines the most significant ones by exploring their statistical and semantic properties. To evaluate the algorithm we compare keyphrases assigned manually by human indexers with terms assigned automatically by the original thesaurus-based keyphrase indexing algorithm and by the one adapted for indexing with WordNet.
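The core idea of mapping phrases to synsets so that synonyms reinforce a single concept can be sketched as follows. The synset table and scoring here are hypothetical stand-ins (real WordNet lookups and the paper's significance measures are richer); this only illustrates the phrase-to-synset mapping step.

```python
from collections import Counter

# Hypothetical toy synset table standing in for WordNet: each phrase
# maps to a synset identifier; synonymous phrases share one synset.
SYNSETS = {
    "car": "vehicle.n.01",
    "automobile": "vehicle.n.01",
    "engine": "engine.n.01",
    "motor": "engine.n.01",
    "road": "road.n.01",
}

def rank_synsets(document_phrases, top_n=2):
    """Map phrases onto synsets and rank synsets by frequency, so that
    synonyms ("car", "automobile") reinforce the same concept."""
    counts = Counter(SYNSETS[p] for p in document_phrases if p in SYNSETS)
    return [synset for synset, _ in counts.most_common(top_n)]

phrases = ["car", "automobile", "engine", "road", "car"]
print(rank_synsets(phrases))  # "vehicle.n.01" ranks first (3 mentions)
```

A real implementation would add the statistical and semantic significance measures the abstract mentions on top of this mapping.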
Artificial Intelligence, Knowledge Engineering
Intelligent Tutoring Systems (ITSs) provide an ideal environment for coached learning. A goal in ITS development is to maximise effective learning, which provides the motivation for this research. This paper proposes the notion of problem templates (PTs): mental constructs used by experts to retrieve large amounts of domain-specific data to solve a problem. This research aims to examine the validity of such a construct and investigate its role with regard to effective learning within ITSs. After extensive background research, an evaluation study was performed at the University of Canterbury in which PTs were created in Structured Query Language (SQL) and were used to model students, select problems, and provide customised feedback in the experimental version of SQL-Tutor, an Intelligent Tutoring System. The control group used the original version of SQL-Tutor, where pedagogical (problem selection and feedback) and modelling decisions were based on constraints. Prior research, real-life examples, and preliminary results show that such a construct could exist; furthermore, it could be used to help students attain high levels of expertise within a domain. Students using the template-based ITS showed high levels of learning within short periods of time. The author suggests further evaluation studies to investigate the extent and detail of its effect on learning.
Digital Libraries, Human Computer Interaction
Page turning is an important, yet invisible, part of paper-based document navigation. However, this affordance is not easily reclaimed in a digital setting; the interaction often becomes interruptive, rather than an unselfconscious act. Providing a literal representation of page turning may help this navigation to be seamlessly integrated into the flow of readers' regular activities, allowing people to transfer their paper-based document navigation skills to the digital world. This paper presents an overview of three page turning techniques for electronic books that are sufficiently realistic, scalable and computable in real time. A summary of the main features of each technique is presented at the end of this paper.
We present a project with the goal of developing a general model of self-explanation (SE) support, which could be used in both well- and ill-defined instructional tasks. We have previously studied how human tutors provide additional support to students learning with an existing intelligent tutoring system. Although the tutors were not given specific instructions to initiate or facilitate SE, there were instances when SE support was provided. Analysis of these interactions indicates that they helped the students to improve their understanding of database design. On the basis of these findings, we developed a self-explanation model, which we present in this paper.
Numerical approaches for representing and reasoning about information struggle when the data is too imprecise or uncertain. People, on the other hand, cope very effectively with vague information in daily life. This has motivated the field of qualitative reasoning (QR), which focuses on coarse, qualitative distinctions between objects and relations. A substantial body of work has emerged from the QR community; however, there is a lack of unifying principles for relating the various QR approaches, and it is not always clear when and how QR should be applied. These issues must be addressed before QR can be properly integrated into standard engineering tools and practices. In this paper the author's PhD programme is outlined, covering (a) the research aim of developing a framework for supporting the design and implementation of QR solutions, and (b) the research approach, which is based around the performance of case studies, two of which are discussed.
Artificial Intelligence, Machine Learning
This paper presents the design and implementation of JavaITS, a constraint-based intelligent tutoring system for teaching the Java programming language. In order to learn programming, a student must acquire new cognitive skills, which, coupled with having to also learn the syntax of a particular programming language (necessary to apply a practical context to this skill), can make the process overwhelming. Even if a student can understand programming at a micro-level, to be a better programmer they must be aware of the overall design and context of a program, a useful skill that is often an afterthought. The goal of our project is to make the process of gaining programming skill both accessible, through smoothing the learning curve, and relevant (from a practical perspective), such that transfer problems are reduced.
Human behaviour consists of more than simple, conditioned responses to stimuli in the environment. Actions are influenced by intentions because plans are consciously formulated to reach certain goals. Each stimulus may also be associated with more than one plan or goal. I am developing a neural network model of how plans compete in the brain to influence behaviour.
Digital Libraries, Data Mining
This paper describes a new technique for obtaining measures of semantic relatedness. Like other recent approaches, it uses Wikipedia to provide a vast amount of structured world knowledge about the terms of interest. Our system, the Wikipedia Link Vector Model (WLVM), is unique in that it does so using only the hyperlink structure of Wikipedia rather than its full textual content. To evaluate the algorithm we use a large, widely used test set of manually defined measures of semantic relatedness as our benchmark. This allows direct comparison of our system with other similar techniques.
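One plausible reading of a link-vector approach is to represent each article by its outgoing links, weight those links tf-idf style by how rare the link target is, and compare articles by cosine similarity. The article names, link counts, and weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse link vectors
    (dicts mapping target article -> weight)."""
    dot = sum(w * v.get(a, 0.0) for a, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical corpus statistics: how many of the articles link to each target.
TOTAL_ARTICLES = 1000
LINKS_TO_TARGET = {"Engine": 50, "Wheel": 80, "Fruit": 40}

def link_vector(outlinks):
    """Weight each outgoing link by the log of the target's inverse
    link frequency, so links to rare targets count for more."""
    return {t: n * math.log(TOTAL_ARTICLES / LINKS_TO_TARGET[t])
            for t, n in outlinks.items()}

car = link_vector({"Engine": 3, "Wheel": 2})
truck = link_vector({"Engine": 2, "Wheel": 3})
apple = link_vector({"Fruit": 5})
print(cosine(car, truck) > cosine(car, apple))  # True
```

Because "Car" and "Truck" share weighted links while "Apple" shares none, the relatedness ordering comes out as expected even with no article text at all.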
Software Engineering, Graphics
We have a software visualisation architecture that requires tools to develop visualisations from XML execution traces and integrate the visualisations into users' web environments. Most existing web software visualisation systems create 2D visualisations; those that do use 3D rely on technologies that are outdated, not designed for the web, and hard to extend. We are building a tool that transforms XML execution traces into web-enabled visualisations in X3D, the Web3D Consortium's open standard for web 3D graphics, and exploring how suitable X3D is for use in software visualisation. Our tool and visualisations will help developers to understand the structure and behaviour of software for reuse, maintenance, re-engineering, and reverse engineering.
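Since X3D is itself XML, the trace-to-visualisation transformation can be sketched as a small XML-to-XML rewrite. The trace format, node layout, and geometry below are invented for illustration; the actual tool's trace schema and visual mapping will differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical execution-trace format; real traces will differ.
TRACE = """<trace>
  <call class="Parser" method="parse" depth="0"/>
  <call class="Lexer" method="next" depth="1"/>
</trace>"""

def trace_to_x3d(trace_xml):
    """Turn each recorded call into an X3D Box, offsetting deeper calls
    along the y axis and later calls along z. A minimal sketch of the
    XML-trace-to-X3D transformation."""
    scene = ET.Element("Scene")
    for i, call in enumerate(ET.fromstring(trace_xml).iter("call")):
        transform = ET.SubElement(
            scene, "Transform",
            translation=f"0 {-float(call.get('depth'))} {-i}")
        shape = ET.SubElement(transform, "Shape")
        ET.SubElement(shape, "Box", size="1 0.5 1")
    x3d = ET.Element("X3D", profile="Interchange")
    x3d.append(scene)
    return ET.tostring(x3d, encoding="unicode")

print(trace_to_x3d(TRACE))
```

The resulting document can be viewed in any X3D-capable browser plugin, which is what makes X3D attractive for delivering visualisations into a web environment.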
Computer Vision, Image Processing
Previously, the Logit-Logistic Fuzzy Colour Constancy (LLFCC) algorithm was developed and tested, significantly improving colour object classification by compensating for the effects of variations in illumination conditions. However, it was implemented only serially, and requires tedious hand calibration to extract the colour contrast rules used for optimal colour classification. Furthermore, as colour calibration for object detection entails repetitive colour classification and tuning of the colour descriptors, running these algorithms serially proves extremely slow. Hand-calibrating the rules involves trial-and-error runs, and can take up to an hour to complete. In light of these problems, this paper presents a novel parallel technique which dramatically improves the speed of the LLFCC algorithm and automates the colour contrast rule extraction.
Information Systems, Artificial Intelligence, Knowledge Engineering, Computer Vision
The current level of development in Intelligent Tutoring Systems (ITSs) ensures successful cognitive support. However, a number of studies suggest that learning outcomes are significantly influenced by a complex interaction between the cognitive and affective states of learners. Little research has been done to investigate the effectiveness of learning with the help of affect-aware ITSs. Recent approaches to affect recognition rely on facial feature tracking and physiological signal processing, but there is no clear winner among them because of the complexity and ambiguity associated with the task and the low-level data interpretation. The goal of our project is to develop a robust way of affect recognition to create affect-aware pedagogical agents in order to improve users' engagement, motivation and learning outcomes.
Human Computer Interaction, Pattern Recognition
InkKit is a diagramming sketch tool designed for use on Tablet PCs. The most important part of InkKit is its ability to reliably recognise hand-drawn diagram components. This is made difficult by the presence of both geometric shapes and characters in diagrams. The goal of our research is to improve sketch recognition by improving InkKit's accuracy in grouping and classifying strokes in a diagram into text characters and shapes. We have done this by identifying the most significant features of strokes that can be used to distinguish shapes from text, using a decision-tree-based partitioning technique. Implementation and evaluation of this new "shape divider" using these features against InkKit's existing divider and the Microsoft divider have shown that our divider is more accurate at dividing text and shape strokes and can therefore improve overall sketch recognition.
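A decision-tree divider reduces, at each node, to a threshold test on one stroke feature. The two features and thresholds below are hypothetical examples of the kind of features involved (ink size, curvature), not InkKit's actual learned tree.

```python
def classify_stroke(bbox_diagonal, curvature_per_unit):
    """Tiny hand-built decision tree over illustrative stroke features.
    Intuition: text strokes tend to be small and curvy; shape strokes
    tend to be large and geometrically regular. Thresholds are made up."""
    if bbox_diagonal < 30:            # small ink -> likely a character
        return "text"
    if curvature_per_unit > 0.5:      # large but very curvy -> still text
        return "text"
    return "shape"

print(classify_stroke(12, 0.9))   # a small, curvy stroke
print(classify_stroke(200, 0.1))  # a large, straight-edged stroke
```

In practice such a tree would be induced automatically from labelled strokes, with many more features per node, rather than hand-written.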
Information Systems, Distributed Systems
This paper gives a synopsis of the author's PhD project. In order to address a broad range of conference participants, we have specifically decided to give a general, rather non-technical outline of the tackled research problems. This decision is reflected in the overall structure of this paper:
After introducing the general research area and the context of the project, we outline the three research problems that are addressed within the associated dissertation. We then give a conceptual overview of the research that has already been carried out, as well as of the remaining steps that need to be undertaken. We conclude this paper by stepping back from the actual dissertation and relating its contributions to the broader research community.
System Usability, Information Systems, Artificial Intelligence
Building effective learning tools is an art that can only be perfected by a great deal of exploration involving the tools' audience: the learners. This paper focuses on accounting for the learners' spatial ability as well as providing an additional help channel in Intelligent Tutoring Systems. We modified ERM-Tutor, a constraint-based tutor that teaches logical database design, to provide not only textual feedback messages, but also messages containing combinations of text and pictures, in accordance with the multimedia theory of learning. We also added a question-asking module which enables students to ask free-form questions. Results of preliminary studies show a promising indication for further exploration. We plan to use these results as the basis for another evaluation study in early 2007.
Human Computer Interaction
Many layouts have been proposed for keyboards since the invention of the typewriter in the 1800s. The two most popular formats, QWERTY and Dvorak, have been at the centre of a great debate for decades; both layouts are claimed to be optimal. Academic evidence is available supporting each layout, yet more than 75 years after the debate began there is still no clear answer. This paper discusses the history behind each format and presents new scientific experiments designed to give a final determination of the optimal layout.
Algorithm Engineering, Network Research, Hardware Research
This paper explores the concept of wireless sensor networks and discusses the focus of our research. We introduce the reader to what a wireless sensor network is, how it works, and also give some example applications of wireless sensor networks. The paper then identifies the focus of this research as a wireless sensor network design system for agricultural applications, and discusses some of the ideas involved.
Network Research, Internet Security
The combination of 3G and WLAN wireless technologies offers the possibility of achieving anywhere, anytime, always-best-connected services, bringing benefits to both end-users and service providers.
The motivation for these heterogeneous networks arises from the fact that no single technology, service or architecture can provide ubiquitous coverage and high throughput across all geographical areas. Also, the mobility requirements of mobile users change with various scenarios. Such users typically want to connect to the public or private networks most convenient to them at the time of connection. However, each switch between interfaces may result in the loss of data packets, causing network congestion and placing additional load on the network traffic.
We present a versatile mobility solution, which accommodates different interfaces with different levels of security and authentication, and could be deployed as and when required. We have also proposed and analysed an automated algorithm for optimised handoff to integrate different heterogeneous wireless networks. Introducing link quality, a hysteresis effect and dwell timers into the interface selection algorithm optimises the handoff initiation time as well as the selection of the most suitable network. We present performance analysis to validate our architectural approach.
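The interplay of hysteresis and dwell timers can be sketched as a small simulation: a handoff is triggered only when a candidate interface beats the current one by a margin (hysteresis) for several consecutive samples (the dwell timer), which suppresses ping-pong switching. The quality figures, margin, and dwell length below are illustrative assumptions, not the paper's measured parameters.

```python
def select_interface(samples, hysteresis=5.0, dwell=3):
    """Simulate handoff over a sequence of link-quality samples.
    samples: list of dicts {interface: quality}. Switch to a candidate
    only when it exceeds the current interface by `hysteresis` for
    `dwell` consecutive samples."""
    current = max(samples[0], key=samples[0].get)
    candidate, streak = None, 0
    history = [current]
    for s in samples[1:]:
        best = max(s, key=s.get)
        if best != current and s[best] >= s[current] + hysteresis:
            streak = streak + 1 if best == candidate else 1
            candidate = best
            if streak >= dwell:          # dwell timer expired: hand off
                current, candidate, streak = best, None, 0
        else:
            candidate, streak = None, 0  # margin lost: reset the timer
        history.append(current)
    return history

samples = [
    {"wlan": 60, "3g": 40},
    {"wlan": 50, "3g": 58},  # 3g clears the margin: streak 1
    {"wlan": 45, "3g": 56},  # streak 2
    {"wlan": 40, "3g": 55},  # streak 3 -> handoff to 3g
    {"wlan": 38, "3g": 54},
]
print(select_interface(samples))
```

A single good sample for 3G would not trigger a handoff here; only a sustained advantage does, which is exactly the stability the hysteresis and dwell-timer combination buys.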
Information Systems, Network Research
Wireless networks have enjoyed explosive growth since their introduction in the late 1990s. However, of the various forms of wireless protocols available, only a few have become universally accepted. One reason for the ubiquity of one protocol over another is the perceived robustness of the security built into the protocol. As WiMAX is poised to be unleashed worldwide within the next 12 months, security concerns are already beginning to appear. My proposed research will closely examine the encryption key management protocols of 802.16 used in mesh networks, and using simulation show that there are indeed some security flaws in the protocol. My research will then involve using simulation to model modifications to the existing protocols, showing that my design may be used to increase the security effectiveness of the protocol and answering my research question: "What constitutes an effective but efficient solution to the security problems in 802.16 mesh networks?"
Information Systems, Internet Security
One dimension of Internet security is Web application security. The purpose of this Design Science study is to design, build and evaluate a computer-based tool to support security vulnerability and risk assessment in the early stages of Web application design. The tool will facilitate risk assessment by managers and will help developers discover vulnerability in system requirements by providing a means for calculating potential losses and graphically visualizing risk levels of different system components. This represents a proactive approach to building in Web application security at the requirements stage, as opposed to the more common reactive approach of putting countermeasures in place after an attack has occurred and losses have been incurred. The primary contribution of the proposed tool is its ability to make known security-related information (e.g. threat trees, countermeasures) more accessible to developers and to translate the lack of security measures into potential dollar losses for managers so they can prioritize security spending.
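One common way to translate missing countermeasures into dollar figures is the classic annualised loss expectancy (ALE) calculation; whether the proposed tool uses this exact model is not stated, so treat the formula and all figures below as an illustrative assumption.

```python
def annualised_loss(asset_value, exposure_factor, occurrences_per_year):
    """Classic risk formula: ALE = SLE * ARO, where the single loss
    expectancy SLE = asset value * exposure factor (fraction of the
    asset lost per incident) and ARO is expected incidents per year."""
    sle = asset_value * exposure_factor
    return sle * occurrences_per_year

# Hypothetical Web application components and threat estimates.
components = {
    "login form (SQL injection)": annualised_loss(100_000, 0.4, 0.5),
    "search box (XSS)":           annualised_loss(100_000, 0.1, 1.5),
}
for name, ale in sorted(components.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${ale:,.0f}/year")
```

Ranking components by ALE is one way a manager-facing view could prioritise security spending across system components.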
Algorithm Engineering, Formal Methods, Distributed Systems
Non-blocking concurrent data structure implementations are complex and hard to reason about. We are investigating how model checking tools can be used to find errors in, and to verify properties of, these algorithms.
Formal Methods, Software Engineering, Human Computer Interaction
When we design and implement software systems there are many different approaches we can take. We may choose a formal approach, where we formally specify the system and prove properties about its intended behaviour before refining to an implementation. Conversely, we may take a totally informal approach, where we plan our system by jotting ideas down on paper, discussing ideas with users, drawing sketches, etc. Formal methods are naturally suited to modelling underlying system behaviour, while user-centred approaches to user interface design fit comfortably with more informal approaches. In order to develop systems which benefit from both of these approaches, and recognise their respective uses for different parts of the design process, we need to find ways of integrating user-centred design methods with formal methods. My research addresses this problem, and in particular looks at ways of including informal designs within a formal process and examining these in relation to standard notions of refinement.