COMP241 Project List

Lust in Translation

Forced to study Shakespeare at high school? Found yourself wondering what all the excitement was about? Then this is the project for you!

The aim of this project is to develop a web resource that, by tapping into the wide variety of source texts and related resources, makes Shakespeare's relevance far more immediate.

Some directions to explore:

  • Wikification of Shakespeare's plays: identify a specialised wiki for Shakespeare (??) and use Wikipedia Miner with it.
  • Realistic Books of Shakespeare, with embedded links to YouTube.
  • Social media notes gathered on one side of each page, like the CompSci maths text-book.
  • Automatic text-mining of placements.

Practice Shakespeare -- again YouTube (Internet Archive??) of plays: miss a part out and you practise your lines. The app doesn't play the next part until you have finished the "keyphrase" at the end of the passage (speech recognition); automatic alignment of the play to audio. The Complete Works of Shakespeare, Abridged is a must!!
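The "keyphrase" gating can be prototyped independently of any particular speech recogniser. Here is a minimal Python sketch, assuming some recogniser has already produced a transcript string (the audio capture itself is out of scope):

```python
# Gate the playback: the next recorded part only plays once the user's spoken
# lines (as transcribed by a speech recogniser) contain the passage's keyphrase.

def normalise(text: str) -> str:
    """Lower-case and strip punctuation so matching is forgiving."""
    return " ".join("".join(ch for ch in text.lower()
                            if ch.isalnum() or ch.isspace()).split())

def passage_finished(transcript: str, keyphrase: str) -> bool:
    """True once the keyphrase ending the passage appears in the transcript."""
    return normalise(keyphrase) in normalise(transcript)
```

In the real app this check would run on each partial transcript the recogniser emits, and a fuzzier match (edit distance, say) would be kinder to nervous actors.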

To conclude, take a moment to view Hugh Laurie's take on Shakespeare.

Sample potential on-line resources

Space Cadet, Space Kinect

First-person, 3D spaceship flying game. The twist? You, as the pilot, fly the ship using gestures and voice commands detected by the Kinect box. Steering is just the beginning ... you'll also want to support firing those laser cannons to knock out the pesky enemy spaceships that keep attacking you.

And if all that sounds too simple, how about targeting the game at a young audience, and allowing them to design the gestures they want to use to control the ship. Keep an eye on the software design and you should also be able to develop the game so it works with more modest input equipment: a web camera and the computer's built-in microphone. In this version, instead of gestures, the pilot might have (for example) a set of coloured cards they have made (one with a red square on it, another with a yellow dot, and so on), which they pick up and use in front of the web-cam to adjust particular controls of the ship.
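To make the coloured-card idea concrete, a hedged sketch of the classification step is below. The card colours and the commands they map to are made-up placeholders, and the (R, G, B) value stands in for the average colour of a web-cam frame region, which real code would grab with something like OpenCV:

```python
# Classify the dominant colour seen by the web-cam against the set of cards
# the player has made, mapping each card to a ship command.

CARDS = {
    "red square":  ((200, 30, 30),  "fire lasers"),
    "yellow dot":  ((220, 210, 40), "boost speed"),
    "blue stripe": ((30, 60, 200),  "raise shields"),
}

def classify_card(rgb):
    """Return (card name, command) whose reference colour is nearest to rgb."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    name = min(CARDS, key=lambda c: dist(CARDS[c][0], rgb))
    return name, CARDS[name][1]
```

Nearest-colour matching in RGB is crude; a version that works under varying room lighting would probably compare hues in HSV space instead.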

YouNewspaper

For me, the difference between reading the newspaper and watching a TV news channel is a bit like the difference between listening to music on my iPod compared with listening to an audio cassette (tape). It's the random access vs. serial divide played out all over again, but in a different domain. I love listening to my iPod, and I love reading a newspaper. Listening to audio on tape in comparison is clunky, cumbersome, and slow to achieve what I want, and the same is true (for the same reasons) when looking to digest news from a TV source.

Now imagine an application that delivers video-based news sourced from TV stations that is automatically compiled and delivered in the format of a newspaper. Your top interest is sport? Then flick to the back "page" of the video-newspaper and click on one of the "articles" you see there (formed out of a keyframe taken from the relevant video, and text based on a summary of the closed-caption text that is provided) to start watching that video clip.
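The "summary of the closed-caption text" step could start life as a classic frequency-based extractive summariser. A small sketch, assuming the caption feed has already been gathered into a plain string:

```python
# Pick the most "representative" sentences from closed-caption text: score each
# sentence by the corpus frequency of its words, keep the top few, and emit
# them in their original order.

import re
from collections import Counter

def summarise(caption_text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", caption_text.strip())
    words = re.findall(r"[a-z']+", caption_text.lower())
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)  # preserve reading order
```

Real captions are noisier than this (all-caps, speaker labels, timing codes), so a cleaning pass would come first; stop-word removal would also sharpen the scoring.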

OCRing and keyframe detection are fairly computationally demanding tasks: the kind of thing you might want to throw at a cluster computer. No problem, I've arranged for the Smoke and Mirrors teams to have access to one of those, housed in the digital library lab.

Trade-Sea

In the annals of computer games, Elite was an influential 3-D space trading game that caught the imagination of many. In this project, the idea is to develop the same style of trading but to set things on terra firma, where participants navigate sailing boats. That is, physically model sailing to a level where gamers really have to know how to sail and navigate.

Themes to explore: look into global data sets to see if the map of the world can be based on real data; likewise weather data and/or wind speeds around the globe. Consider setting the game up as a "real-time" game (given the current penchant for this style of game) where if you want to sail from Liverpool to New York the trip takes as long as if you were to actually make the trip yourself. The voyage should be plenty exciting given the other ships (both computer-automated and other traders in the game) you meet on the way.
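As a back-of-the-envelope check on the real-time idea, the figures below (roughly 3,000 nautical miles from Liverpool to New York, a steady 6 knots) are illustrative assumptions, not game data:

```python
# How long would a real-time Atlantic crossing keep a player at sea?

def voyage_days(distance_nm: float, speed_knots: float) -> float:
    """Days under way, assuming a constant speed in knots (nm per hour)."""
    return distance_nm / speed_knots / 24

# ~3,000 nm at 6 knots comes out at around three weeks of real time,
# which shows why encounters along the way matter so much to the design.
crossing = voyage_days(3000, 6)
```

Variable winds would make the real figure swing widely, which is exactly the kind of drama the weather-data theme above could feed into.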

Where Do You Think You Are?

A GPS-aware mobile phone app combined with a spatial digital content management system. Capture knowledge expressed in situ in whatever form is most convenient for the user: record them speaking, or let them take photos or video with their smart phone; then, later, docking with the home PC (or over the network), allow the captured information to be edited and embellished, and then transferred to a dynamically evolving digital portal of the aggregated information.

Consider looking at a technology like PhoneGap to develop the mobile application across a range of handheld platforms; consider the open source digital library software Greenstone as a base technology for the portal, enhanced with features of social media.

A TIPPLE with (Augmented) Reality

Tipple is a research project in the department where we have been exploring the confluence of two forms of information source: a Tourist Information Provider (TIP) information system and a spatially aware Digital Library (DL). We call the resulting hybrid Tipple, as 'TipDL' is a bit of a mouthful.

The net result is a mobile application that lets you know about places of interest when you are nearby. It is best experienced on a mobile phone (Android-based), walking around a location such as the Hamilton Gardens, where, upon reaching a location of interest, the phone plays a double sonar ping to let the person know there is something of interest nearby; from this, if they so choose, audio is read out to them (in the gardens, it is a transcribed version of the walking tour staff give visitors). While not as interesting as the mobile app side of things, you can at least get a taste for how the content is stored on the digital library side of things by visiting the following link.
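The proximity trigger behind the sonar ping is essentially a great-circle distance check. A sketch, where the 30-metre radius is an illustrative assumption rather than what Tipple actually uses:

```python
# Fire the "double sonar ping" when the phone comes within some radius of a
# point of interest, using the haversine great-circle distance.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two (lat, lon) points in metres, spherical Earth."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_poi(phone, poi, radius_m=30.0):
    """phone and poi are (lat, lon) pairs."""
    return haversine_m(*phone, *poi) <= radius_m
```

Consumer GPS is only accurate to a few metres at best, so in practice the radius needs to be generous and the ping debounced so it doesn't re-fire as the reading jitters.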

So why is the project called "A Tipple with Augmented Reality"? Simple: the idea of this Smoke and Mirrors project is to develop the idea further so the user can move their phone around in a video-camera style mode and use Augmented Reality (AR) techniques to overlay additional information to help them understand what they are looking at.

Digital Jabberwocky

There's a game I used to play as a kid called "silly monsters". Perhaps you played something like it too. The game works with two players, but is even better when there are more. At the start of the game each person is equipped with a piece of paper, typically a strip roughly 6 cm wide torn from an A4 page. In the first round of the game, each person draws the head of a monster. They then fold the paper over, with just the neck showing, and pass the strip of paper to the player on their left. In the second round, everyone draws the body (and arms) of a monster, again folding the paper over (this time with just the start of the legs showing) and passing it on. The next round is for the legs, and then a final round for the feet. Finally everyone unfolds the strip of paper to see what silly monster they have, and then lets the others see.
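The pass-to-the-left mechanic maps neatly onto a rotating list, which is worth nailing down before any drawing code is written. A sketch (player names and the four body parts are just placeholders for whatever the digital version uses):

```python
# Simulate the passing of strips: each round, every player draws one part on
# the strip that has just arrived, then all strips rotate one place.

ROUNDS = ["head", "body", "legs", "feet"]

def play(players):
    strips = {p: [] for p in players}    # strip identified by its first owner
    holding = list(players)              # holding[i] = strip currently with players[i]
    for part in ROUNDS:
        for i, artist in enumerate(players):
            strips[holding[i]].append((part, artist))
        holding = holding[1:] + holding[:1]   # everyone passes to the left
    return strips
```

With four players each strip ends up drawn by all four people, which is the property the digital version has to preserve when the "strips" become image layers shuttled between phones or browser sessions.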

The aim of this project is to develop a digital version of this game, but with lots more bells and whistles! For instance, in a digital version there could also be a round where the person chooses the sort of voice the monster is to have, perhaps even what it is going to say. With enough care in how the drawing of a body part is done in the application, it should also be possible to animate the monster.

A fun web site that might inspire you—in particular around some ideas on how to support animation of a 2-d picture—is drawstickman.com

Spotlight To The Max!

An all too commonly encountered situation: you are reading a help file for a particular application (because you cannot figure out how to do something) and it asks you to select a particular menu option ... but you just can't find it anywhere in the hierarchical stack of menu options! Now imagine the menu bar of your application has a search box embedded in it. That's Spotlight To The Max. With it, your problems fade away ... type the name of the interface option you are looking for (or a substring of it) into the search box, and Hey Presto, the application goes straight to that part of the interface for you. You've found what you were looking for, and can move on with the actual things you want to do, rather than getting frustrated!
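At its core, the search box just needs the menu hierarchy flattened into paths that can be matched against the query. A sketch with an invented toy menu (the real project would harvest this structure from the running application):

```python
# Flatten a nested menu into "File > Export > As PDF" style paths, then do a
# case-insensitive substring match on the final menu-item name.

MENU = {
    "File": {"New": {}, "Export": {"As PDF": {}, "As PNG": {}}},
    "Edit": {"Preferences": {}},
}

def flatten(menu, prefix=()):
    for name, sub in menu.items():
        path = prefix + (name,)
        yield " > ".join(path)
        yield from flatten(sub, path)

def search(menu, query):
    q = query.lower()
    return [p for p in flatten(menu) if q in p.split(" > ")[-1].lower()]
```

Returning the full path (rather than just the item) is what lets the application then walk the hierarchy and open the right submenu for the user.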

Now dig a bit deeper.

Suggested implementation pathway: develop this form of Deep Search in (say) the GDK desktop environment by replacing its DLLs (Shared Object Libraries).

Now add in the idea of "Live Pause and Rewind" made popular by TV set-top boxes such as MySky and TiVo. Played out in a desktop environment, the idea of this feature is to let a user go back and see something they did earlier. Been looking at a web page, then decided to close the tab because you'd got what you needed from it ... only to discover half an hour later you want to have a look again but can't remember the query terms you had used to find it. No problem, just rewind to that earlier part of your interaction with the desktop and press pause to read the page.

But we're still not done yet. Return to the idea of modifying the DLLs that provide the desktop/windowing environment and dig a bit deeper. You should be able to patch into the method calls that render text on the screen—methods of the form DrawString(GC,x,y,text). Now you can add a text-based search feature to your live-rewind video feature, helping users construct more sophisticated ways of going back in time to review something they had done earlier.
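If every intercepted DrawString call is logged with a timestamp, the rewind feature gains a searchable index. The class below is a hypothetical sketch of that index, independent of the actual hooking mechanism:

```python
# Index the text that the patched DrawString hook sees, so the user can ask
# "when did the word X last appear on my screen?" and jump the rewind there.

from bisect import insort

class ScreenTextIndex:
    def __init__(self):
        self._log = []                    # (timestamp, text), kept time-sorted

    def record(self, timestamp, text):
        """Called from the patched DrawString hook for each rendered string."""
        insort(self._log, (timestamp, text))

    def find(self, query):
        """Timestamps at which the query text was drawn, oldest first."""
        q = query.lower()
        return [t for t, text in self._log if q in text.lower()]
```

A production version would cap the log's size and also store the on-screen (x, y) position, so matching text could be highlighted in the rewound video frame.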

Socially Driven: Everything I Do is Driven By You

The idea: apply crowd-sourcing techniques to NavMan-style navigation.

OpenStreetMap based:

  • easy route (based on previous routes driven);
  • car-parking data, cctv cameras blended in;
  • beep when getting to corner too fast;
  • support for keying in local names for places .. e.g. the Tron!

Not just about car routes ... think about pedestrian routes also. Common routes walked on the waterfront of Wellington, for example. Think about privacy issues (such as a route only being published after X different people have walked it, or at least have shared the same segment).
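One way to realise that privacy rule: only publish a segment once at least k distinct users have logged it. A sketch, where the segment IDs are placeholders for whatever OpenStreetMap way/node identifiers the real system would use:

```python
# k-anonymity style filter: a walked segment becomes public only after at
# least k different users have contributed it.

from collections import defaultdict

def publishable_segments(walks, k=3):
    """walks maps each user ID to the set of segment IDs they have walked."""
    counts = defaultdict(set)
    for user, segments in walks.items():
        for seg in segments:
            counts[seg].add(user)
    return {seg for seg, users in counts.items() if len(users) >= k}
```

Note this protects individual segments, not whole routes: a unique *sequence* of popular segments can still identify someone, which is worth discussing in the project write-up.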

DIY Streetview

Being there is (not) everything!

The idea of this project is to develop a DIY StreetView capability and combine it with the findings of an archaeological dig (or alternatively what is historically understood about a site). It would let people virtually visit locations that they might otherwise never get to see, and more importantly explore around the site and learn about what the excavation has discovered (or what scholars have reconstructed about the historic events that took place at that location).

A local example where such a system could be applied is the attack on the pa at Ruapekapeka during the Maori Wars by the British forces. In addition to the StreetView capability, a model of the pa could be developed using SketchUp (for example) and used to augment the environment, along with landmarks capturing various events as the battle developed. Alternatively, a prepared "tour" of the site could be developed, with audio commentary. If implementing this aspect of the project, I particularly like the capability developed in the World Telescope project to break out of the tour at any point and go walkabout yourself, returning to the (paused) tour when you are ready.

For this project to fly it really needs at least one person in the team who really loves messing around with gadgets—wires, cables, the lot—and making them do things they were never intended to do.

Doing a "Street View" of the university would be a great high-profile example of applying the developed software. As another way to spin this, don't think so much about streets but think in terms of areas. Working in this direction with a bit of augmented reality thrown in would allow you to explore the concept of e-Archaeology (that's a term I just made up!) where the user could virtually visit an historic site, and see how it was in times gone by. The Maori pa at Ruapekapeka is an example of a local location where this could work well.

In undertaking this version of the project, a natural extension would be to support people actually being there, and as they walk around the site, they can hold up their phone and "see" what the location would have looked like.

Mashup: Telling You Where To Go

Looking to go on a trip? Configure the system with your details (how many in your family, how far you are prepared to travel, what your interests are, etc.) and let it propose some recommendations. In making the recommendation, the software could factor in live weather forecasts (it would make a difference to whether going to the beach was a good idea or not; current tides, that sort of thing) and trawl social media sites to develop a corpus of information that would help develop a recommendation suitable to you.
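A toy scoring function makes the weather-factoring idea concrete. Everything here (the tags, the 0.3 rain discount, the destination records) is an invented assumption; a real system would plug in live forecast and social-media data:

```python
# Score destinations by overlap with the user's interests, then heavily
# discount weather-sensitive outings when the forecast is poor.

def score(destination, interests, rain_forecast: bool) -> float:
    base = len(set(destination["tags"]) & set(interests))
    if rain_forecast and destination.get("outdoor", False):
        base *= 0.3          # outdoor trips lose most of their appeal in rain
    return base

def recommend(destinations, interests, rain_forecast):
    return max(destinations, key=lambda d: score(d, interests, rain_forecast))
```

Even this crude version shows the interesting behaviour: the same user gets sent to the beach on a sunny day and to the museum when rain is forecast.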

Scope to apply Machine Learning? Semantic Web technologies?