EURASIP Journal on Applied Signal Processing
Special Issue on Anthropomorphic Processing of Audio and Speech

Anthropomorphic systems process signals "in the image of man." They are designed to solve a signal processing problem by imitating the processes that accomplish the same task in humans. In the area of audio and speech processing, anthropomorphic systems have achieved remarkable successes: perceptual audio coding even sparked the MP3 hype.

At first sight, it may seem obvious that audio processing systems should benefit from taking the perceptual properties of human audition into account. For example, front-ends that extract perceptually meaningful features currently show the best results in speech recognizers. However, these features are typically used for a stochastic optimization that is itself not anthropomorphic at all. Thus, it is not obvious why they should perform best, and perhaps the truly optimal features have not yet been found because, after all, "airplanes do not flap their wings."

In general, we believe there are several situations in which an anthropomorphic approach may not be the best solution. First, its combination with nonanthropomorphic systems could result in suboptimal overall performance: the quantization noise that was cleverly concealed by a perceptual audio coder could become unmasked by subsequent linear or nonlinear processing. Second, nonanthropomorphic approaches might be better adapted to the technology chosen for the implementation: airplanes do not flap their wings because jet engines are a technically far more efficient means of propulsion. Nevertheless, a lot can be learned from imitating natural systems. As such, anthropomorphic and, by extension, biomorphic systems can be considered to play an important role in the development of new technologies.
The aim of this special issue is to bring together papers from different areas of audio and speech processing that deal with aspects of anthropomorphic processing, or in which an anthropomorphic or perceptual approach was taken. Research papers, review papers, and tutorial papers will be considered, provided that they are unpublished. Topics of interest include (but are not limited to):

o Speech and Audio Coding
o Audio Measurements and Speech Analysis
o Objective Quality Measures for Audio and Speech
o Speech Synthesis (Rule-Based, Articulatory, ...)
o Audio Virtual Reality
o Content-Based Music Search
o Music and Instrument Recognition
o Audio Classification and Retrieval
o Speech and Speaker Recognition

Authors should follow the EURASIP JASP manuscript format described at the journal site http://www.eurasip-jasp.org/. Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP manuscript tracking system according to the following timetable:

Manuscript Due: November 1, 2003
Acceptance Notification: April 1, 2004
Final Manuscript Due: August 1, 2004
Publication Date: 4th Quarter, 2004

GUEST EDITORS:
Werner Verhelst, Vrije Universiteit Brussel, Belgium; wverhels@etro.vub.ac.be
Jürgen Herre, Fraunhofer IIS-A, Germany; hrr@iis.fhg.de
Gernot Kubin, Technical University Graz, Austria; g.kubin@ieee.org
Hynek Hermansky, Oregon Health & Science University, USA; hynek@ece.ogi.edu

EDITORIAL BOARD REPRESENTATIVE:
Soeren Hold Jensen, Aalborg University, Fredrik Bajers Vej 7, A3, DK-9220 Aalborg Oest, Denmark; shj@cpk.auc.dk

Please visit http://www.eurasip-jasp.org for more information about the journal. A free sample copy of the journal can be requested at the journal's web site. EURASIP JASP publishes as many issues as required, based on the flow of high-quality manuscripts and currently scheduled special issues. To submit a proposal for a special issue, please contact the journal's editor-in-chief.