ACM Symposium on Applied Computing
Data Streams Track

held in conjunction with the 27th Annual ACM Symposium on Applied Computing
at the University of Trento, Italy, March 25-29, 2012.

1 Motivation

The rapid growth of information science and technology in general, and of the complexity and volume of data in particular, has introduced new challenges for the research community. Many sources produce data continuously: sensor networks, wireless networks, radio frequency identification (RFID) readers, customer click streams, telephone records, multimedia data, scientific data, retail chain transactions, and so on. The data produced by such sources are called data streams. A data stream is an ordered sequence of instances that can be read only once, or a small number of times, using limited computing and storage resources. These sources are characterized by being open-ended, flowing at high speed, and generated by non-stationary distributions.

Data streams are of increasing importance to the research community, as new algorithms are needed to process streaming data in reasonable time. Researchers from many areas (data mining, machine learning, OLAP, databases, etc.) are designing new approaches or adapting traditional algorithms to the streaming setting. The number of researchers in this field is also growing considerably, and data streams have become an established topic at many conferences (ICML, KDD, IJCAI, ICDM, SAC, ECML, etc.).