Music

Activity
Audio Hack Day powered by SoundCloud

CPEurope

SoundCloud is excited to be joining Campus Party Europe on August 23rd for a special Audio Hack Day hackathon. Audio Hack Day is a 24-hour hack event where people of all interests - developers, designers, or anyone with a good idea - get together to hack on the most innovative, coolest or quirkiest audio apps possible.

Plan on hacking something? Please sign up for a seat and to be kept up to date about API presentations and prizes here: http://audiohackday.org/

SoundCloud is a social sound platform that makes it easy to share, broadcast, track and promote created sounds across social networks, websites, mobile devices or simply between friends. SoundCloud accounts are free to use, with more advanced creators able to upgrade to premium packages, featuring advanced statistics, controlled distribution and custom branding.

Conference
How an Italian Microcontroller is Changing Music

Becky Stewart

Electronic or computer music often evokes the image of a performer trapped behind a laptop, with little opportunity to emerge from the glow of the screen. But an emerging generation of artists is striving to move away from their computers and bring stage performance back to their music - and Arduino is playing a key part. Arduino is an open source platform that makes programming microcontrollers and building custom hardware interfaces more accessible to those without formal electrical engineering training. In this talk we'll look at how this platform has changed how musicians create music, from the whimsical to the revolutionary, and how open source engineering is changing the face of pop music. After explaining what exactly an Arduino is and what it can do, we'll look at some exciting projects currently in development.

Speaker: Becky Stewart is a co-founder of Codasign, an interactive arts technology studio based in East London. She completed her PhD in acoustics and spatial audio with the Centre for Digital Music at Queen Mary, University of London in 2010 and is now interested in combining signal processing with physical computing. She is currently exploring how non-traditional physical interfaces like hand-knit objects can be used to create and explore music.

Conference  
Listening Machine

CPEurope

Peter Gregson will talk about the artistic concept and background of The Listening Machine and look at the compositional techniques that informed the aesthetic of a "digitally native" piece of music.

Speaker: Peter Gregson. Born in Edinburgh in 1987, Peter Gregson is a cellist and composer. Recently, he has premiered works by composers including Tod Machover, Daniel Bjarnason, Joby Talbot, Gabriel Prokofiev, Max Richter, Jóhann Jóhannsson, Steve Reich, Martin Suckling, Milton Mermikides, Howard Goodall, John Metcalfe, Scott Walker, and Sally Beamish. He also collaborates with many of the world’s leading technologists, including Microsoft Labs, UnitedVisualArtists, Reactify and the MIT Media Lab. 

Conference 
The Listening Machine

Daniel Jones

The Listening Machine is a 6-month-long piece of generative music which translates the dynamics of 500 UK Twitter users into sound, with a huge array of recordings made with Britten Sinfonia. Sentiment, topic and conversation rate affect different facets of the piece, which is ever-evolving. This talk gives a technical overview of the piece and the process of turning speech patterns into music, from natural language processing to algorithmic composition. 
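To make the idea of mapping conversation dynamics to musical parameters concrete, here is a tiny Python sketch. It is purely illustrative and is not taken from The Listening Machine itself; the function name, value ranges and mapping rules are all assumptions:

```python
def sentiment_to_music(sentiment, conversation_rate):
    """Map a sentiment score in [-1, 1] and a conversation rate
    (messages per minute) to a (mode, tempo_bpm) pair."""
    mode = "major" if sentiment >= 0 else "minor"
    # Clamp the rate to 0..60 and scale it into a 60..180 BPM range.
    tempo_bpm = 60 + min(max(conversation_rate, 0), 60) * 2
    return mode, tempo_bpm
```

In this toy mapping, a cheerful, busy conversation (sentiment 0.5 at 30 messages per minute) yields a major-mode passage at 120 BPM, while a gloomy, quiet one slows to a minor-mode 60 BPM.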

Speaker: Daniel Jones (UK) is a doctoral researcher at Goldsmiths, University of London, exploring the relationships between complexity, creativity and social dynamics. This manifests itself in both scientific and artistic output: he has published work on music theory, creativity, systems biology and artificial life, and exhibits his digital work internationally, harnessing algorithmic processes to create self-generating artworks. Recent works include The Listening Machine (with Peter Gregson, 2012); Variable 4 (with James Bulley, 2011), an outdoor sound installation which transforms live weather conditions into musical patterns; Maelstrom (with James Bulley, 2012), which uses audio material from media-publishing websites as a distributed, virtual orchestra; Horizontal Transmission (2011), a digital simulation of bacterial communication mechanisms; and AtomSwarm (2006—2009), a musical performance system based upon swarm dynamics. Daniel co-ordinated the technical infrastructure for The Fragmented Orchestra, winner of the prestigious PRSF New Music Award 2008. His audio development work for Papa Sangre and The Nightjar was nominated for two BAFTAs, including ‘Audio Achievement’.

Conference
Help! We need sound!

Nela Brown

Gone are the days when a sound designer's job was to use analogue tape machines to play sound effects of dogs barking, footsteps and door knocks alongside a music score delivered by a pit orchestra during a live theatre performance. Nowadays, a person working with sound could be having a conference call with a team of architects, designers and programmers to work out the technical requirements for an interactive sound installation for a children's garden, whilst uploading the latest remix of music composed for an experimental 12-hour overnight performance onto a server so it can be downloaded and used by a theatre company on tour in Brazil - all from the comfort of their home studio. The sound effect of dogs barking might be the same, but the clients, collaborators, deadlines, ways of experiencing and interacting with sound, as well as modes of delivery, are vastly different. Through examples of her varied sound portfolio, Nela Brown will talk about why working with sound is still the 'best job in the world' and what sort of skills you will need once you leave the safe environment of academia and throw yourself into the world of sound freelancing!

Speaker: Nela Brown is a Croatian sound artist, musician, composer and sound designer. In recent years her sound work has travelled across Canada, Italy, Brazil, Spain, the Czech Republic, England, Scotland and the US as part of theatre plays, dance performances, electroacoustic compositions, short films, documentaries and interactive installations. She is currently working on her PhD at Queen Mary, University of London; exhibiting interactive artworks, delivering workshops and speaking at conferences as leader of G.Hack (an art & technology lab for women focused on sharing knowledge and developing interactive media projects through collaboration with other universities, arts organizations and industry partners) and WISE@QMUL (the women in science and engineering society); hacking into toys and musical instruments at Music Hack Days; and collaborating with design research lab Stromatolite on a variety of projects.

Panel Discussion
The Future of Music is Social

Music is now all about recommendations and sharing. The success of services like Spotify and SoundCloud shows that we are moving toward a new era where owning music as an individual is less important than playing it together as a community. Artists, too, are embracing these services as a new way to interact with their fans.

In this panel, we will discuss this development and look into how users, artists, labels and advertisers can all be part of and benefit from this social music experience.

Barbara Hallama

Barbara Hallama (aka BarbNerdy) is a Berlin-based digital doyenne: a networker, trailblazer, early adopter, future-seeker, talent scout and DJ all rolled into one very neat and clever package. She is the archetypal digital gal with a penchant for seeking out the newest, the coolest, and the most awesome of what the Web and iOS have to offer. When it comes to the world of electronic entertainment, especially music online, she will be the first to get her hands on that killer service, app or gadget. You name it, she's alpha-, beta- and user-tested it to the max and back again. With her motto "Sharing Means Caring", you can look forward to some well-thought-out insights from her at Campus Party Europe in Berlin.

Karlheinz Brandenburg

Prof. Dr. Karlheinz Brandenburg has been a driving force behind some of today’s most innovative digital audio technology, notably the MP3 and MPEG audio standards. He is acclaimed for pioneering work in digital audio coding. He is professor at the Institute for Media Technology at Ilmenau University of Technology and director of the Fraunhofer Institute for Digital Media Technology IDMT in Ilmenau, Germany.


Ben Fields

Ben Fields leads Musicmetric's data science team in an attempt to wrangle some sanity into the Internet’s vast supply of horribly formed music data. He has a PhD from the Intelligent Sound and Music Systems group in the Computing Department at Goldsmiths, University of London. His work there focused on merging social and acoustic similarity spaces to drive playlist creation and related user-facing systems. He is an expert on metadata, structured data, the semantic web and recommendation systems. In his spare time, he is a co-chair of the annual international Workshop On Music Recommendation And Discovery, has given an Ignite London talk about beer styles, occasionally DJs, is an accredited beer judge and homebrews beer. He thinks bios in the third person are weird but figures that’s how they’re meant to be written.

Stephan Baumann

Stephan Baumann heads the Competence Center Computational Culture (C4) at the German Research Center for AI in Kaiserslautern and Berlin (DFKI). He is currently engaged in research operations working on the cutting edge of the Social Web. He did a PhD on Artificial Listening Systems at DFKI and IRCAM/Paris. In parallel he co-founded sonicson GmbH - a startup for music recommendation engines - which was sold to Bertelsmann in 2004. His current research interests are in algorithm design for Social Network Analysis, RealityGames, Semantic Music Recommenders and the Post-Digital/Neo-Analog world. He is a musician and music lover at heart. He still performs live and frequently buys the hot shit on CD.
http://www.dfki.de/~baumann
http://mecallemand.de

Peter Kirn

Peter Kirn is a composer, digital artist, and journalist, born in Kentucky and now based in Berlin. As the founder of createdigitalmusic.com and createdigitalmotion.com, he covers the intersection of technology with music creation and visual interaction and performance for an international audience of musicians and visualists. He has also contributed to Popular Science, Macworld, Keyboard, Wax Poetics, and DE:BUG, and recently edited the book "The Evolution of Electronic Dance Music" from Backbeat (Hal Leonard). His own work runs the gamut from experimental audiovisual performance to sound installation to mobile apps, and has been presented at venues including FEED Soundspace, LEAP Gallery, Stereoluxe (Nantes, FR), Frequency Festival (Lincoln, UK), and LPM (Rome, IT). He is a PhD candidate at the City University of New York.

Workshop - Workshop 2 area
Create Your Own Electronic Instrument with Arduino and PureData

Arduino is a popular word in the communities that bridge art, music and technology - but what is it, and what can you do with it? In this workshop attendees will learn how to use the popular open source hardware platform to create a musical interface of their own design. A mixture of high and low technology will go into prototype MIDI controllers that generate music with PureData, an open source music development environment.

While no experience with building electronics is needed, it is recommended that participants have some programming experience in any language and bring their own computers.
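As a taste of the kind of logic such a controller involves, here is a minimal Python sketch of the mapping step: scaling a 10-bit analog reading (the 0-1023 range produced by Arduino's analogRead()) onto a range of MIDI note numbers. In the workshop this logic would live on the Arduino and in PureData; the function and the default note range here are illustrative assumptions:

```python
def sensor_to_midi_note(reading, low_note=48, high_note=72):
    """Map a 10-bit analog reading (0-1023) onto a MIDI note number
    between low_note and high_note (default: C3 to C5)."""
    reading = min(max(reading, 0), 1023)  # clamp out-of-range readings
    span = high_note - low_note
    return low_note + round(reading * span / 1023)
```

A sensor at rest (reading 0) plays the lowest note, a fully pressed sensor (1023) the highest, and everything in between is spread evenly across the two-octave range.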

Becky Stewart

Speakers: Becky Stewart is a co-founder of Codasign, an interactive arts technology studio based in East London. She completed her PhD in acoustics and spatial audio with the Centre for Digital Music at Queen Mary, University of London in 2010 and is now interested in combining signal processing with physical computing. She is currently exploring how non-traditional physical interfaces like hand-knit objects can be used to create and explore music.

Adam Stark

Adam Stark is co-founder of the London-based interactive arts technology studio Codasign. He completed his PhD, on using intelligent digital technologies to create new forms of interaction in live music performances and art installations, at the Centre for Digital Music at Queen Mary, University of London. He is currently working with musicians and artists to get these technologies on stage and into rehearsal rooms.

Conference
Making interactive albums via mobile apps / Are apps the future format for albums?

Yuli Levtov

Smartphones are now the primary way in which we consume music on the go. But these devices can do so much more than simply stream music from the Internet or play songs from their own internal storage. In this talk, Yuli will explore the ever-growing world of agile music, its characteristics, and the role of the music app as a new format for albums, including live examples of generative, interactive and reactive music, from RjDj's Inception the App to Björk's Biophilia.

Speaker: Yuli Levtov is director of Reactify, a mobile app and sound installation production company dedicated to exploring new ways of experiencing and consuming music. Having worked at the pioneering 'reactive music' app company, RjDj, he continues to work on new and innovative ways to perform and interact with sound. Reactify builds groundbreaking mobile apps for labels and artists, interactive music installations for events and festivals, custom musical instruments for the stage, and much more. Specialising in interactive, generative and reactive music systems, Reactify is constantly looking to push the boundaries of modern music consumption and creation.

Workshop - Workshop 2 area
Playing the Invisible: Imagining Music, Visually, with Free Tools

Peter Kirn

What would a circle sound like? A grid? A cube, bouncing around in space? What if you could draw and sketch music as easily as doodling? What if music were a game?

The wonderful thing about music is that you can feel it, but not see it. Making that invisible thing visible has challenged musicians for centuries. The score is one solution - time in blocks from left to right, pitch lined up vertically - but with computers, we can do more.

Using free and open source tools, friendly to non-programmers, we'll dream up some new ways to transform sound and musical structure. We'll try out some simple, easily-modified examples (built with Processing and Pd), allowing non-coders to fiddle with interactive musical structures. You'll leave with some of the tools to make your own interactive music for games, art, or just for fun. We'll also take a quick look at some of the experiments in music and visuals over the years, from crazy architectural installations to Sesame Street.
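As a flavour of what "drawing music" can mean, here is a small Python sketch (illustrative only - in the workshop these ideas are explored with Processing and Pd, and all names here are assumptions) that answers the opening question literally: it places points evenly around a circle and maps each point's angular position to a degree of a major scale.

```python
def circle_to_pitches(n_points, base_note=60, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Place n_points evenly around a circle and map each point's
    angular position to a degree of a major scale above base_note
    (MIDI 60, middle C). Integer arithmetic avoids float edge cases."""
    pitches = []
    for i in range(n_points):
        # Fraction i/n_points of the way around the circle -> scale degree.
        degree = (i * len(scale)) // n_points
        pitches.append(base_note + scale[degree % len(scale)])
    return pitches
```

A seven-point circle, for instance, traces one full C-major scale; more points revisit degrees, and fewer points skip them - the geometry becomes the melody.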

What to bring: A computer running OS X, Windows, or Linux. Pen and paper (for some doodling).

Peter Kirn is a composer, digital artist, and journalist, born in Kentucky and now based in Berlin. As the founder of createdigitalmusic.com and createdigitalmotion.com, he covers the intersection of technology with music creation and visual interaction and performance for an international audience of musicians and visualists. He has also contributed to Popular Science, Macworld, Keyboard, Wax Poetics, and DE:BUG, and recently edited the book "The Evolution of Electronic Dance Music" from Backbeat (Hal Leonard). His own work runs the gamut from experimental audiovisual performance to sound installation to mobile apps, and has been presented at venues including FEED Soundspace, LEAP Gallery, Stereoluxe (Nantes, FR), Frequency Festival (Lincoln, UK), and LPM (Rome, IT). He is a PhD candidate at the City University of New York.

Conference
Record label: Please copy this record to all of your friends

Christian Villum

This is the story of how Uhrlaut/Urlyd Records and female artist Tone made an international music platform out of experimenting with CC licenses and seeking new paths for labels and artists within open file sharing. Printing 'Please copy this record to all of your friends' on the back of their records, Danish maverick label Uhrlaut/Urlyd Records pioneered as the first label in the world to use a CC license on music releases while being backed by an official collecting society, the Danish KODA. Though inviting music fans to copy and redistribute the freely offered music files - while also distributing vinyl and CDs for sale in retail outlets - seemed absurd to most people, it quickly turned out to be quite a business model. Label manager Christian Villum will present insights into this music venture from 2008 up until now, as well as other non-traditional and DIY-based initiatives that helped pave the way for its success.

Speaker: Christian Villum is an independent, self-employed bootstrapper, web activist and entrepreneur working with media, arts, the web, open culture and technology - and spends his time developing a myriad of different projects. He is, among other things, the co-founder and co-executive director of hackspace/art venue Platform4, promotes digital freedom as lead of Creative Commons Denmark and runs maverick electronic music label Uhrlaut Records. He holds a master's degree in Culture, Communication & Globalization from Aalborg University and has previously lived, worked and studied in Berlin, New York and Chicago.

Conference
In Pursuit of the Perfect Pairing - Music, Recommender Systems, and Beer

Ben Fields

Recommendation is deeply embedded in Web-based music services. These services employ many techniques, from curated playlists (The Hype Machine) to content similarity (The Echo Nest) to collaborative filtering (Last.fm). But can music recommendation techniques tell us what we should be drinking before a concert? Or where to drink it? Or what tasty beverage best matches Doom Metal? In this talk I’ll briefly survey music recommendation techniques, focusing on personalisation and content-based recommendation. I’ll then introduce a dataset of beer descriptions and ratings. We’ll apply the earlier techniques (some with slight modifications) to this rather more intoxicating domain, via a case study. When it’s all over, everyone should know the answer to that most important of questions: what beer will go best with this tune?
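As a rough illustration of the content-based side of this (not the speaker's actual code - the toy data and the bag-of-words similarity measure are assumptions), even a simple cosine similarity over word counts can already suggest a pairing:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two free-text descriptions."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_pairing(track_description, beers):
    """Pick the beer whose description is most similar to the track's."""
    return max(beers, key=lambda name: cosine_similarity(track_description, beers[name]))
```

With a toy catalogue of beer descriptions, a track described as "dark heavy doom metal intense" lands on the imperial stout rather than the pilsner - the same matching idea that, with better features and real data, drives content-based music recommenders.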

*All code used in this talk will be available online under an OSS licence.

Speaker: Ben Fields leads Musicmetric's data science team in an attempt to wrangle some sanity into the Internet’s vast supply of horribly formed music data. He has a PhD from the Intelligent Sound and Music Systems group in the Computing Department at Goldsmiths, University of London. His work there focused on merging social and acoustic similarity spaces to drive playlist creation and related user-facing systems. He is an expert on metadata, structured data, the semantic web and recommendation systems. In his spare time, he is a co-chair of the annual international Workshop On Music Recommendation And Discovery, has given an Ignite London talk about beer styles, occasionally DJs, is an accredited beer judge and homebrews beer. He thinks bios in the third person are weird but figures that’s how they’re meant to be written.

Conference
Reality Jockey (UK) on next-generation web audio, HTML5 audio & pure data

This talk presents the open-source project Cynical, a Native Client (NaCl) library for the Chrome web browser that enables the use of the Pure Data (Pd) audio programming language on the web. The library allows messages to be sent back and forth between the JavaScript and Pd domains, in addition to allowing static assets (such as audio files) to be read, written, and even exported. Cynical is presented as an alternative to HTML5 audio, which is poorly supported and arguably suffers from an already outdated API. The graphical programming interface of Pd, along with its vibrant user community and extensive library collections, makes for an ideal audio programming environment, particularly well suited to bringing next-generation audio interaction to the web and beyond.

Martin Roth

Speaker: Martin Roth is the CTO of Reality Jockey, Ltd., directing the technical development of the only reactive music platform in the world. Martin holds a PhD from Cornell University in the areas of mobile wireless ad-hoc networks and emergent systems, having developed new biologically inspired routing algorithms exhibiting the robust and distributed properties seen in social insect colonies. He has worked as a post-doctoral researcher at Deutsche Telekom Laboratories studying mathematical models of emergent behaviour in computer networks, and led a team dedicated to investigating the merits of delay-tolerant human networks. Martin has most recently worked with Google building advanced mobile web applications.

Sébastien Piquemal

Speaker: Sébastien Piquemal, born in France in 1986, is a computer engineer, musician and sound designer based in Helsinki. He has been working as a web developer at Futurice Ltd. since 2010, mostly developing Futurice's internal services and IT infrastructure. Sébastien is also studying sound design at the Helsinki Media Lab and, in his free time, developing open-source projects such as WebPd - a JavaScript library for running Pure Data patches on the web.

Conference
Vocal illusions – Voice Synthesis and Transformation technologies

Jordi Janer

In this talk Voctro Labs, a spin-off company of the MTG-UPF, will introduce two voice processing technologies that are the result of more than 10 years of research in this area. KALEIVOICECOPE (Voice Transformation) is a real-time audio effect that allows manipulating the characteristics of an input voice with high realism. Users can obtain high-quality voice transformations (e.g. gender change, age modification) or fiction voices (monster, robot, alien, etc.).

Virtual singers have become popular in Japan thanks to the success of Yamaha's VOCALOID singing voice synthesizer. We will give some insights behind the creation of a virtual singer and introduce BRUNO and CLARA, the first VOCALOID virtual singers in Spanish.

Speaker: Jordi Janer works as a researcher at the Music Technology Group of the Universitat Pompeu Fabra in Barcelona and is co-founder of Voctro Labs. In the field of audio processing, his research interests relate to singing voice analysis and real-time interaction. His current research projects also extend to source separation and soundscape modeling in virtual environments. After graduating in Electronic Engineering (2000), he worked as a DSP engineer at Creamware GmbH (Germany, 2000-2003), designing and developing audio effects and virtual synthesizers. Later joining the UPF, he obtained his PhD degree in 2008. As a visiting researcher, he stayed at McGill University (Canada, 2005) and Northwestern University (USA, 2009).

Conference
The Musical Avatar: Visualizing Your Musical Preferences

Mohamed Sordo

The Musical Avatar, an iconic representation of one's musical taste, is generated from a set of preferred music tracks provided by the user. An audio analysis tool computes different acoustic features for each track, which are then used to build and train machine learning models that infer semantic information such as musical genre, mood and instrumentation. These semantic descriptors are finally summarized and mapped to the visual domain to create a cartoon-like profile of the user.
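The final summarize-and-map step can be sketched in a few lines of Python. This is an illustration only, not the project's actual pipeline: the descriptor keys (e.g. "genre_electronic") and the visual attributes are hypothetical names invented for the example.

```python
def avatar_attributes(descriptors):
    """Summarize semantic descriptor probabilities (values in 0..1)
    into visual traits by picking the strongest genre and mood."""
    genre = max((k for k in descriptors if k.startswith("genre_")),
                key=lambda k: descriptors[k])
    mood = max((k for k in descriptors if k.startswith("mood_")),
               key=lambda k: descriptors[k])
    # Strip the "genre_"/"mood_" prefixes to get the trait labels.
    return {"outfit": genre.split("_", 1)[1],
            "expression": mood.split("_", 1)[1]}
```

A user whose tracks score highest for electronic music and a happy mood would, under this toy mapping, get an avatar with an "electronic" outfit and a "happy" expression.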

Speaker: Mohamed Sordo is a post-doctoral researcher at the Music Technology Group of the Universitat Pompeu Fabra in Barcelona (Spain). He obtained his MSc and PhD in Information Technologies, Communication and Audiovisual Media from the Universitat Pompeu Fabra in 2007 and 2012, respectively. He has published over 20 scientific papers, served as a reviewer and program committee of several international conferences and journals, and collaborated in different European-funded research projects. He is also an active member of music hackathons, having been co-organizer of the three Music Hack Day editions in Barcelona. His main interests are music text/web mining, information retrieval, machine learning and software engineering.
