COMP241 Project Ideas


After Effects Expeditee

Project Manager: Robert Burke
Team members: Chris Thresher; Michael Washer; Scott Anderson; Ben Cartner

Project idea: Spatial Hypermedia Video Editing and Annotation environment

Details: Develop video editing capabilities that parallel existing audio editing capabilities. Focus on an area of use, for example: as a video editing suite (aka AfterEffects), as a way of interacting with course lectures that have been videoed, or to support realtime video conferencing.

Some very rudimentary support for playing videos was added to Expeditee over the summer. It requires Java 1.8 (available in beta form) and makes use of JavaFX.

It would also be possible to look at the detection of repetitive graphical actions (one of the original projects that didn't make the final cut as a team project) within the scope of this project.

  • Java coding, and in particular, JavaFX video
  • See the Apollo (music) project by Brook Novak described in this article
  • Installer and source code resources

Variations: allow crowd-sourced annotations; augment linking between resources using Near Identical Image Detection (typically coded in C/C++); support live video conferencing.

Not What You See 3D ... Boo!

Project Manager: Steven Mason
Team members: Vincent Lamont; Cameron Paddy; Ryan Gilbert; Mitchell Davies

Project idea: develop a game that uses eye-tracking software, where the game-play element comes from the fact that whenever the user looks at something on the screen, it quickly disappears. (Spin-off project idea from TIM Meets Einstein above.)

2D or 3D?

There are quite a few open source projects that provide eye-tracking and gaze following capabilities: Google search

Variation: use a 3D games environment? Use Oculus Rift head gear (note: someone in the team would need to have access to such gear, as it looks unlikely the university would be able to source it in time for the start of the project).

Project idea: the above project idea but in 3D with a ghoulish twist.

Eye-tracking software could potentially work for this (if your setup was, say, projecting onto a large screen). An alternative would be to look at using Oculus Rift virtual reality head gear. Note: someone in the team would need to have access to such gear, as it looks unlikely the university would be able to source it in time for the start of the project.

Bolt Up: Kiwi-style Home Security

Project Manager: Jawwad Chaudhry
Team members: Pritpal Singh Chahal; Sheena Mira-ato; Abdul Vahman; Nathan Rendle

Project idea: inexpensive home security, kiwi style, built around a mixture of Raspberry Pis and the low-cost Android phones available. For example: enable web cameras to be hooked up and used as motion sensors; have the ability to log in remotely and switch lights on ...

Suggested focus: using the assembled gear to provide a more plausible, human feel. For example, rather than play some music when an outside motion sensor detects movement (done before), how about a solution where sounds that naturally occurred in the house previously have been recorded (automatically by the system) and are used as the audio that is played back. This could even be keyed to the time of day the sounds occurred: sounds of preparing dinner if around 6-7pm; sounds of watching telly, playing computer games (whatever!) if later on. A similar thing could be done taking into account usage of lights through the evening.
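The time-keyed playback idea boils down to a lookup from hour-of-day to the recording the system captured in that slot. A minimal sketch in Java (class and clip names are hypothetical; a real version would play back audio files recorded by the system):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Maps an hour of day (0-23) to the ambient recording captured in that slot.
public class AmbientSelector {
    private final NavigableMap<Integer, String> clipsByHour = new TreeMap<>();

    // Register a recording the system made at the given hour.
    public void record(int hour, String clipName) {
        clipsByHour.put(hour, clipName);
    }

    // Pick the clip recorded at or before the current hour, wrapping around
    // to the latest clip of the day if nothing earlier exists.
    public String clipFor(int hour) {
        Map.Entry<Integer, String> entry = clipsByHour.floorEntry(hour);
        if (entry == null) {
            entry = clipsByHour.lastEntry(); // wrap past midnight
        }
        return entry == null ? null : entry.getValue();
    }
}
```

So a 6pm recording of dinner being prepared would be replayed for any intruder arriving in the early evening, and the late-night telly sounds for anyone arriving after that.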

  • Cyber-Security
  • Raspberry Pi
  • WeMo (from Belkin)
  • eKameno
Prof Mark Apperley has a research interest in this area, and has offered to buy some IP IEC Power Controllers to help out with the project.

Smarten Up: A Kiwi-style Smart Home

Project Manager: Mathew Andela
Team members: Aaron Davison; Ryan Jones; Ben Robinson; Aflah Bhari

Project idea: inexpensive smart home development, kiwi style, built around a mixture of Raspberry Pis and the low-cost Android phones available. Given that it's not going to be possible to build a Smart Home in 6 weeks (!), for the backbone to this project I'm picturing an interactive visual of the home displayed by a central computer, along with a few representative gadgets that have been developed by the team and are plugged in and operational.

Feeding off this idea of a central interactive display, I rather like the idea of this central control component being a Raspberry Pi (rather than a Desktop PC) that is plugged into the main TV in the home (say using an HDMI cable, giving both visual and audio output). There are a few web sites around that detail setting up a Raspberry Pi to be voice activated (i.e., you get to speak to your control centre and tell it what you'd like done in your Smart Home), as well as Text-to-Speech for output. A nice touch would be to use one of the cheap Android phones to allow voice-activated commands to be issued wherever you are in the house ... or even wherever you are.

Example uses of the central system include explicitly controlling lighting and temperature in rooms (especially other parts of the house) as well as household appliances (think white-ware). Settings can either take effect immediately, or occur at a specified time, or perhaps even when a specified set of constraints holds true (such as the dishwasher only running when you are out of the house, as a socially directed objective). Alternatively, the central control system could be making decisions for you: again, looking to optimize social factors, but equally to optimize the running costs of the house.
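The constraint-triggered setting described above could be sketched as an action that the control centre polls, firing only once all of its constraints hold (the class and constraint names here are hypothetical, just to illustrate the idea):

```java
import java.util.function.BooleanSupplier;

// An appliance action that starts only when every one of its constraints
// (e.g. "nobody home", "off-peak power") evaluates true.
public class ConstrainedAction {
    private final String appliance;
    private final BooleanSupplier[] constraints;
    private boolean started = false;

    public ConstrainedAction(String appliance, BooleanSupplier... constraints) {
        this.appliance = appliance;
        this.constraints = constraints;
    }

    // Called periodically by the control centre; returns true once started.
    public boolean tick() {
        if (started) return true;
        for (BooleanSupplier c : constraints) {
            if (!c.getAsBoolean()) return false;
        }
        started = true;
        return true;
    }

    public String appliance() { return appliance; }
}
```

The "dishwasher only when you're out" example is then one `ConstrainedAction` whose single constraint is an occupancy check.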

How does the system know when you are likely to be out of the house? Rather than developing a bespoke sub-component to the system for entering calendar information and other details about your daily life (and where and when exceptions occur), an intriguing possibility is to gain leverage from Semantic Web technologies, so the developed Smart Home system can make "sense" of information you already enter elsewhere in your digital life.

  • Semantic Web
  • Raspberry Pi
  • Android
  • WeMo (from Belkin)
  • eKameno

Torrent TV

Project Manager: Steven Crake
Team members: Jordan Crawford; Lucas De Castro; Thye Way Phua; Peter Oormen

Over in the UK, the British Library has started the continual recording of 22 TV stations for archiving purposes, with a focus on current affairs. Their solution uses some pretty high-tech, high-spec computer equipment. The idea behind Torrent TV is to look to achieve the same thing, but adopting a crowd-sourced approach to capturing the TV content.

You could think of the project as a crowd-sourced personal video recorder (PVR), if that isn't a contradiction in terms! If taking this as the focus for the Smoke and Mirrors project, then this would shift emphasis away from the need to have a continually expanding archive, to one that concentrates on keeping a fixed-sized sliding window of TV content available (say the last month of TV). Here, enriched crowd-sourced ideas could include: letting users mark the precise beginning of a show so later users can start watching the show at just the right point; and providing sub-titles and/or translated sub-titles (stimulated, for example, through an incentive scheme?).
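The fixed-size sliding window amounts to evicting recordings once they fall outside the retention period as new ones arrive. A minimal sketch in Java (class names and the day-based timestamps are hypothetical simplifications):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Keeps only recordings from the last `retentionDays` days, evicting the
// oldest entries as new recordings are added in chronological order.
public class SlidingArchive {
    private final long retentionDays;
    // Each entry is {dayRecorded, showId}, oldest first.
    private final Deque<long[]> recordings = new ArrayDeque<>();

    public SlidingArchive(long retentionDays) {
        this.retentionDays = retentionDays;
    }

    public void add(long day, long showId) {
        recordings.addLast(new long[]{day, showId});
        // Drop anything that has slid out of the window.
        while (!recordings.isEmpty()
                && day - recordings.peekFirst()[0] >= retentionDays) {
            recordings.removeFirst();
        }
    }

    public int size() { return recordings.size(); }
}
```

In the real system the eviction would of course delete the captured video files, and the window would be measured against wall-clock time rather than the arrival of new recordings.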

  • DVB Dongle
  • Video processing
  • Electronic Program Guide (EPG) decoding
  • Peer-to-Peer protocol, for example Bit Torrent
  • C# Source code to the ReplayMe! plugin for MediaPortal

Stop Press: The three DVB Dongles I had on order have now arrived. After a bit of trial and error, I got them working on a Windows PC with the VideoBlaze software that comes with the device, and also the open source project MediaPortal.

For BitTorrent, there are some interesting console/command-line options discussed

Dynamic Junior Scrabble

Project Manager: Rafael Shunker
Team members: Matt de Waard; Jojo Stewart; Leo Xiong; Thomas Scanlon

There is a junior version of Scrabble that is fun to play (even as an adult!), based around the idea of a board that already has the words fixed on it. Players take turns adding two tiles to the board from their hand each time (which they then replenish from the pool at the end of their turn). If, through the act of placing a tile, a player completes one of the words, they get a token. If the tile completes two interlocking words, they get two tokens. At the end of the game, the player with the most tokens wins.

Project idea: Create a tablet-based version of this where the board that is created is different every time. In creating a fresh board, the set of tiles the game needs to play with changes (note: the set of tiles isn't just the aggregation of all the letters on the board; there are around 4-5 more tiles than spaces on the board, and some of the element of skill comes from what those letters are). Another aspect that the project could develop is that the age of the players could be taken into account at the start, with the words chosen becoming more difficult the older the kids are.
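Generating the tile pool for a fresh board could be as simple as taking every letter that appears in the board's words and topping it up with a few extra letters, per the note above. A minimal sketch in Java (names are hypothetical; how the extras are chosen is a design decision for the team):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Builds the tile pool for a freshly generated board: one tile per letter
// on the board, plus `extras` additional tiles (here chosen at random).
public class TilePool {
    public static List<Character> build(List<String> boardWords, int extras, Random rng) {
        List<Character> pool = new ArrayList<>();
        for (String word : boardWords) {
            for (char c : word.toCharArray()) {
                pool.add(c);
            }
        }
        for (int i = 0; i < extras; i++) {
            pool.add((char) ('A' + rng.nextInt(26))); // extra "skill" tile
        }
        return pool;
    }
}
```

A smarter version might bias the extra tiles toward letters that almost, but not quite, complete board words, since that is where the element of skill comes from.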

  • Android/iPhone/Windows Phone/PhoneGap?