Photo Credit: iHeartMedia Portland

How early is too early for some holiday cheer? iHeartMedia Portland’s K103 is already playing 24/7 Christmas tunes starting November 10.

iHeartMedia Portland announced that K103 is again Portland’s Christmas Station, as it has been since 2001, providing 24/7 holiday music for listeners in the Portland/Vancouver/Salem area. Even ahead of Thanksgiving, the station will broadcast around-the-clock holiday tunes by Mariah Carey, Bing Crosby, Nat King Cole, and many more. 

“K103 has been kicking off the holiday season in Portland for over 20 years. Listeners continue to request our mix of Christmas music as their holiday soundtrack every year, and we’re humbled that they have made us a part of their holiday traditions,” says Michael La Crosse, Vice President of Programming for iHeartMedia Portland.

Radio personalities Stacey & Mike, Jana, and Kristina will continue to host the station’s programming. Fans can tune in on 103.3 FM or stream online via the station’s website.

iHeartMedia remains the leading media outlet in the Portland market with multiple platforms, including broadcast stations, live events, data, and digital businesses. iHeartMedia Portland owns and operates KKCW-FM, KLTH-FM, KFBW-FM, KKRZ-FM, KXJM-FM, KKRZ-HD2, KPOJ-AM, and KEX-AM.

The leading audio media company in America, iHeartMedia, reaches over 90% of Americans each month. The company’s broadcast radio assets alone have more consumer reach in the US than any other media outlet — twice the reach of the next largest broadcast radio company — and over four times the ad-enabled audience of the largest digital-only audio service. 

iHeart is the largest podcast publisher, according to data from Podtrac, with more downloads than the next two largest podcast publishers combined. The company has seven times more social media followers than any other top audio media brand. It is the only fully integrated audio ad-tech platform spanning broadcast, streaming, and podcasts. The company continues to leverage its strong audience connection and consumer reach to build new platforms, products, and services.


A reported 100,000 tracks are being uploaded to Spotify and other music services every day. That influx has led to streamers each housing 100 million tracks, and as that total balloons, artists and labels are overcrowded, fans are overwhelmed, and streaming services struggle to keep up.

by Mark Mulligan of MIDiA and the Music Industry Blog

When governments plan to introduce controversial new policies, they prepare the ground in advance (dropping hints in speeches, privately briefing journalists, etc.), so that by the time the new policy finally arrives, it does not feel quite so controversial. A similar process is currently playing out in the music business. The biggest major label executives are starting to seed a narrative into the marketplace about the potentially corrosive effect that the rapidly growing long tail of music and creators is having on consumers’ music-streaming experiences. Of course, it also happens to dent major label market share too, but the issue is not quite as clear cut as it might first appear.

There are three main industry constituents that are at risk from the fattening of the long tail:

  1. Major labels and their artists
  2. Consumers
  3. Long-tail creators

Let’s look at each of those in turn:

1 – Major labels

The first on the list is the most obvious, and also the easiest, to demonstrate. Over the five years from 2016 to 2021, the majors grew recorded music revenue by 71%, which is impressive enough, except that artists direct (i.e., artists who distribute without record labels) grew revenues by 318% over the same period. Consequently, artists direct increased global market share from 2.3% to 5.3%, while the majors went from 68.8% to 65.5%. Meanwhile, the top 10 and top 100 tracks continue to represent an ever smaller share of all streaming. The very least that can be said is that the majors and their artists have collectively grown more slowly than long-tail creators; at most, the case could be made that long-tail creators have eaten into the majors’ growth.
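As a rough sanity check, the share shift follows from the growth figures alone. Treating the 2016 market as 100 units and inferring the 2021 total from artists direct’s reported 5.3% share, a few lines reproduce the majors’ reported decline (the figures are the article’s; the 100-unit base is an illustrative assumption):

```python
# Back-of-envelope check of the share figures above. 2016 revenues are
# expressed as shares of a 100-unit market; growth multipliers come from
# the article. The 2021 total is inferred from artists direct's reported
# 5.3% share, so the result is approximate.
majors_2016 = 68.8
artists_direct_2016 = 2.3

majors_2021 = majors_2016 * 1.71                  # +71% over 2016-2021
artists_direct_2021 = artists_direct_2016 * 4.18  # +318% over the same period

total_2021 = artists_direct_2021 / 0.053          # implied total market size
majors_share_2021 = 100 * majors_2021 / total_2021

print(f"Implied majors' 2021 share: {majors_share_2021:.1f}%")  # ~64.9%
```

The small gap between the implied ~64.9% and the reported 65.5% reflects rounding in the published growth figures.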

2 – Consumers

This one is far harder to make a definitive case either for or against. Consumers tend not to categorise music anywhere near as precisely as the music business. For example, only a third of consumers say they mainly listen to older music, despite industry stats showing that catalogue consumption dominates. Most consumers do not consider music to be ‘old’ as soon as the music business does. So imagine how difficult it would be for consumers to delineate what counts as ‘long tail’. They may say in surveys that ‘music isn’t as good as it used to be’, but they could just as easily be referring to the majors’ music as to the long tail. So we are in the realm of measuring second-order effects (are consumers disengaging from streaming? Not yet, but they might) and of making logical assumptions. If consumers consistently hear poorer-quality music, it is reasonable to assume their satisfaction would decline. However, DSP algorithms push music that matches users’ tastes, and there is so much high-quality music in the long tail that there is no particular reason to assume more long-tail consumption inherently equates to more consumption of poorer-quality music. And do not forget, consumers have demonstrated plenty of tolerance for ‘average’ music in mood and activity playlists.

3 – Long-tail creators

It may sound oxymoronic to suggest that long-tail creators could be hurt by the rise of the long tail. But, as Will Page put it, the rise of the long tail means that “there are more mouths to feed”. The fractionalised nature of streaming royalties means that the more long-tail creators there are, the lower the average stream count per creator and, even more importantly, the harder it is to cut through. The irony is that it is easier to make the case that the long tail is eating itself than it is to establish causality between its rise and the majors’ loss of share.

Divide and conquer

Of course, the missing constituency is the DSPs themselves, but they do not warrant a place here, because they are the ones with the power to scale up or down long-tail consumption via their algorithms. It serves DSPs to have listening fragment to a degree, as it lessens the share and, therefore, the power of any individual label. But if DSPs ever thought they were pushing too far, they would rein in the algorithms.

Where next?

So where does all this leave us? In the ‘do nothing’ scenario, listening continues to splinter, majors lose more share, and long-tail creators find it harder to cut through and earn, while consumers may (or may not) see any meaningful change to their listening experiences. In short, the head loses out, as does the long tail, while the market further consolidates around the ‘body’ of streaming catalogue (in which, by the way, the majors are already key players and could easily ramp up their focus – as WMG is already doing). 

The ‘do something’ options fall into two key groups:

  1. Gate / limit consumer access to catalogue
  2. Gate / limit creator access to royalties

There are many ways to achieve the first (preventing long-tail music getting onto DSP catalogues; lowering long-tail priority in algorithms; creating a separate tier of catalogue; deprioritising / blocking it from search and discovery, etc.). All of this risks looking very much like the establishment trying to prevent the next generation of creators and industry players breaking through. That is without even considering the moral dilemmas of choosing who is ‘in’ and who is ‘out’.

Option two, however, could be more altruistic than it looks. For an enthusiastic hobbyist with a few hundred streams, royalties are going to be little more than a novelty. But for a hard-working, self-releasing singer / songwriter with tens of thousands of streams, the hundreds of dollars are already important. Suppose there were a pay-out threshold, with 1,000 annual streams as the point at which royalties are paid, and the royalties associated with sub-1,000-stream artists distributed among all other artists. Suddenly, those slightly more established long-tail artists can earn more income. 
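To make the mechanics concrete, here is a hypothetical sketch of such a threshold. The per-stream rate, artist names, and stream counts are invented for illustration; none of it is real royalty data:

```python
# Sketch of a pay-out threshold: artists under 1,000 annual streams earn
# nothing, and their pooled royalties are redistributed pro rata among
# artists over the threshold. All numbers are illustrative assumptions.

def redistribute(stream_counts, per_stream_rate=0.003, threshold=1000):
    """Return {artist: payout} under a minimum-streams threshold."""
    below = {a: n for a, n in stream_counts.items() if n < threshold}
    above = {a: n for a, n in stream_counts.items() if n >= threshold}
    pool = sum(below.values()) * per_stream_rate  # royalties forfeited below threshold
    total_above = sum(above.values()) or 1        # guard against division by zero
    return {a: n * per_stream_rate + pool * n / total_above
            for a, n in above.items()}

payouts = redistribute({"hobbyist": 300, "songwriter": 40_000, "mid_tier": 160_000})
```

With these invented numbers, the hobbyist’s $0.90 is forfeited and split pro rata between the two artists above the threshold, nudging the songwriter’s payout from $120.00 to $120.18.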

None of these options are without challenges and moral dilemmas. But the direction of travel appears to be towards something being ‘done’ about the long tail. If that really does end up having to happen, then let us at least try to ensure that the changes benefit long-tail artists too, not just the superstars.


By offloading CPU loads onto the Graphical Processing Unit, GPU Audio is placing a powerful and underexploited parallel processor at the center of music production. The shift could have massive implications for the emerging music metaverse, AI, resource-intensive production environments, collaboration platforms, and bandwidth-demanding concert live streams. The company might provide the critical backbone required to support a myriad of fast-emerging music industry sub-categories.

Keeping up with the breakneck music industry of 2022 is a difficult task. Once upon a time, billions of streams were dazzling enough – now the industry is bursting with avatar bands, metaverse livestreams, increasingly sophisticated DAW and plugin options, emerging collaborative platforms, NFT surges and flops, and resource-intensive, immersive audio experiences.

It’s a dizzying explosion — especially when it comes to the topic of powering it all. Concepts like complex spatial audio environments and real-time jamming sound great on paper, but is there enough processing power for all of it?

Graphics cards — or GPUs — are underexploited processing powerhouses capable of far more than graphics, able to shoulder massive workloads. They are the backbone of the modern AI industry, the infrastructure of metaverses, and now — the future of audio?

As advancements in tech outpace imagination, computing demands for audio dutifully follow. Technologies like spatial audio, neural networks, machine-learning-based plugins, and heavy virtual-analog plugins now confront music software developers with a massive processing need. A new ‘standard of processing’ is needed to go where the tech is going.

The pro-audio industry has tried numerous prospective solutions, from SHARC to FPGAs. But no company has successfully enabled the underutilized power of graphics cards. The tech had been thoroughly idealized yet never effectively executed due to the fundamental differences between GPU architecture and the sequential nature of audio processing.
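That architectural mismatch can be illustrated in a few lines: a per-sample operation like gain applies to every sample independently and maps naturally onto thousands of GPU threads, whereas a recursive (IIR) filter makes each output depend on the previous output, creating the sequential dependency chain that has historically resisted GPU mapping. A minimal NumPy sketch of the two cases:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48_000)  # one second of audio at 48 kHz

# Parallel-friendly: gain applies to every sample independently, so the
# work maps naturally onto thousands of GPU threads.
y_gain = 0.5 * x

# Sequential: a one-pole IIR low-pass, where each output sample depends
# on the previous output -- the dependency chain that makes a naive GPU
# mapping of audio DSP hard.
a = 0.99
y_iir = np.empty_like(x)
y_iir[0] = (1 - a) * x[0]
for n in range(1, len(x)):
    y_iir[n] = (1 - a) * x[n] + a * y_iir[n - 1]
```

The gain line vectorises trivially; the filter loop cannot be split across samples without restructuring the algorithm, which is the kind of problem GPU Audio claims to have solved.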

Now, Los Angeles-based GPU Audio is tapping into the remarkable power of built-in GPUs for audio production and attempting to change the face of a multitude of other industries.

Currently the only company able to fully exploit the processing capability of built-in GPUs, GPU Audio founders refer to themselves as ‘unlockers’ and ‘enablers’ of  ‘accelerated audio computing.’

Judging from initial user feedback, the company appears to be filling a real gap in the industry. Alexander “Sasha” Talashov, co-founder and co-CEO at GPU Audio, described the company as ‘a conductive layer that connects companies, complex tasks, and systems.’ Just recently, the company joined forces with DMN to broaden adoption and accelerate the growth of this powerful solution.

GPU Audio launched its Early Access plugin at NAMM, and it’s already being touted as a solution that brings parallel processing back to the forefront of music production. After the NAMM drop, GPU Audio’s pre-beta user base grew 33-fold this summer and currently stands at over 20,000 users — up from 600 in the first couple of days of June. Just weeks ago, the company released the “Beta Suite,” a growing toolset of audio plugins beginning with classic, mainstay effects: Flanger, Phaser, and Chorus.

Talashov spoke about the company’s broader mission of establishing new standards for pro-audio, and enabling connections to other computing technology advancements. According to Talashov, a good example is the current VST3 standard. “The VST3 doesn’t provide any new features; it doesn’t connect us with anything. No new hardware, no new software, no new bridges to the rest of the world. So we began there — by connecting it to the GPU.”

Speaking about building this bridge with GPUs, Talashov added, “This is not only connecting platforms — it’s connecting industries.”

It’s no secret that processing power bottlenecking has always been the bane of the pro-audio industry.

Fiendish latency has destroyed the sanity of audio producers everywhere. Then there’s the trouble of rendering out stems, summing a group of mixes, or having to freeze, export, and re-import tracks to save DSP power. Even an investment in expensive hardware fails to defeat bad renders, off-beat stem processing, and choppy audio workflows in Dolby Atmos. Despite the considerable advances in Apple silicon, walk into an Atmos-certified studio and you will still find engineers using two or three computers to execute projects.

The economics of tapping into existing GPUs versus purchasing expensive hardware is grabbing user attention.

A bottleneck-free hardware setup can easily cost $4,000 or more when purchasing computing power for audio production. In comparison, a $900 setup with its existing GPU enabled can run programs that usually require external acceleration hardware or expensive desktop-grade solutions, letting users fire up neural amp models, processing, and other essentials with huge performance implications.


Speaking about what GPU Audio is doing, fellow co-founder Jonathan Rowden said, “We greatly power up spatial audio tools. We speed up machine learning and AI tools – even the basics. Tools that use GPU Audio will turn your GPU into a DSP accelerator, saving your CPU headroom for other tasks.”

And what about Apple’s second generation M2 — the buzz that these chips are the ultimate, much-needed native hardware upgrade for track processing?

Speaking about these latest MacBook processors, Rowden appeared unconcerned. “People talk about M1 and M2 MacBook processors as though these eliminate the need for acceleration, but there are still limitations when ‘going native’ for consumer-grade systems. Apple silicon also has GPUs, and we’ll be enabling and unlocking those as well. We’re already working with Apple on solving this and the results are extremely promising. The feature will be available in Early Access in just a few weeks or less.”

Already, the company has raised $6 million. With its current valuation, GPU Audio could be approaching its Series A by the Spring of 2023.

Talking about what’s coming within the next few months, Talashov said, “If you have a Windows laptop, you can download it today. Our first Beta Suite Bundle was made public on October 8th. It’s compatible with PCs with NVIDIA GPUs. AMD support is now in ‘Early Access’ and available on PRO Driver-based GPUs. Internally, we already have Mac OS support for M1 and M2 GPUs, along with AAX support for ProTools – and before long, users can test this in early access.”

As the system is developed, there’s also the promise of exciting implications for real-time collaboration. Cloud-based DSPs can provide real-time, non-destructive workflow opportunities that could change the game in myriad industries. Rowden elaborated on this, saying, “We operate on the grounds of core-level innovation. GPU Audio can make audio renders available instantly. We can get rid of the export button. Imagine how a simple feature can change the game of how companies design the future of workflows and the impact it will have on creative output – from such a fundamental level.”

GPU Audio is focused on the tech’s scalability and upgradability, and is acutely interested in collaborating with third parties to develop products. Speaking about this, Rowden clarified, “This is what we mean by bridging accelerated computing and pro-audio. This tech is a new standard that we believe can be used anywhere.”

With grand plans of tech applications in various industries, GPU Audio claims it will ‘power the future of audio from music to metaverse’ – and company execs are pretty convincing when they explain how.

From the accelerating computational needs of gamers and PC users to manufacturing industries running mammoth equipment for complicated computations, unlocking the potential of GPUs is what gets the ball rolling.

According to Rowden, “Anywhere a GPU is present, GPU Audio powered applications will be able to harvest its power.”

In the final weeks of October, the GPU hardware powerhouse AMD (Advanced Micro Devices) also invited GPU Audio as guests to the Adobe MAX conference in Los Angeles. Rowden called AMD one of their ‘strongest supporters,’ and added, “AMD wants to bring GPU-accelerated audio processing to their creative users – video and graphics creators. Senior leaders at AMD expressed that everything their creative user base needs is accelerated by GPUs, and it makes sense to expand this to audio.”

For AI and machine learning, GPUs’ highly parallel nature lets them process and accelerate demanding workloads.

Designing robotics, industrial equipment, and autonomous vehicles requires data input from various sources and sensors, such as video equipment and audio sensors. New neural network models, techniques, and use cases appear rapidly. For speech and image recognition demands and other language processing, GPUs accelerate data ingestion and expedite the entire AI workflow. With the acute interest and support of leaders at multiple GPU hardware companies, GPU Audio is working on backend and front-end solutions for making AI-based audio applications faster and more powerful.

And what about web3 and the metaverse?

GPUs may also supply the enormous computing power needed by web3 developers, so the industry can tackle the massive amounts of channels and pathways that go into metaverse creations.

Rowden is clearly thinking big. “Because GPU Audio utilizes a source of compute that makes up a vast processing infrastructure – especially cloud-based – it stands as the most powerful potential resource for the future of cloud-based DSP. This has massive implications for metaverse and web3-focused projects,” he explained, adding, “GPU powered plugins will be easily deployed on the cloud, and it won’t stop there.”

GPU Audio aims to facilitate realistic immersive experiences in the metaverse by ‘enabling’ the tech that allows a network of cross-communicating and connected virtual worlds. It will also unlock advanced AI executions.

On this front, US-based NVIDIA, a world-leading designer and manufacturer of GPUs, chipsets, and other multimedia software, has also taken an interest in what GPU Audio can bring to the table. NVIDIA has asked the company to focus more of its efforts on the development of AI advancement tools and metaverse applications.

NVIDIA is currently developing Omniverse, a collaboration platform for developing metaverses and related products and ecosystems. As that initiative takes root, NVIDIA is now exploring how GPU Audio processing can transform the ‘cost’ of audio processing bottlenecks into an advantage on their cloud.

This is just one example of the pioneering tech that GPU Audio says will change the world – as they continue to enable GPU solutions.

GPU Audio encourages anyone interested to join its more than 20,000 users in Beta and Early Access testing. Beta covers the production suite of plugins, while Early Access features a convolution reverb and introduces new features, like Mac M1/M2 support, before they reach Beta.

Please contact Jonathan Rowden at [email protected] or on LinkedIn to learn more.


We recently had an insightful conversation with someone working at Spotify. We asked him: how many songs should you release as an artist? 

We got some inside information from someone working at Spotify on how many songs you should release. The person we spoke with curates one of the most extensive dance playlists on Spotify. If you want to succeed as an artist on Spotify, you need to know everything about the algorithm. Spotify uses a sophisticated algorithm to decide which songs are eligible for curated playlists.

When you do not have a ton of releases or a big following on Spotify, the algorithm won’t push your music to a bigger audience. However, you can work on the number of releases you put out. That is what the algorithm actually favours: consistent releases. So when you do not have a big following, consistency is key. The more frequently you release new material, the more reasons fans have to come back and listen to your music. Pair that with the right marketing plan, with enough run-time to get the most out of your marketing efforts.

Some key numbers

When you’re just starting out, aim for six to eight releases per year. This is a reasonable amount to achieve and will show some consistency. Doing 12 releases a year when you are starting out is just not feasible: it’s too much to produce high-quality songs and promote them properly. Six to eight releases works out to one every one and a half to two months. 
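The cadence arithmetic can be sanity-checked with a short sketch; the kickoff date here is an arbitrary assumption for illustration:

```python
from datetime import date, timedelta

# Eight releases spread evenly over a year lands on the "every one and a
# half to two months" rule of thumb described above.
releases_per_year = 8
interval = timedelta(days=365 // releases_per_year)  # 45 days, roughly 1.5 months
schedule = [date(2023, 1, 6) + i * interval for i in range(releases_per_year)]
```

At six releases a year the interval stretches to roughly 60 days, i.e. the two-month end of the range.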

Depending on the music you make, there are some key points to take into account. If you make pop music or music with more vocals, it is a good idea to release less music throughout the year and focus on running a proper marketing campaign; around six releases a year would be the sweet spot. If you make more underground or instrumental music, aim for at least eight releases, and maybe consider doing two to four remixes. 

Albums are dead

We also asked him about releasing albums to the streaming service, and he had a clear answer: no, just don’t do it! It doesn’t make sense nowadays. The only reason to release an album as an artist is if you want to take a creative journey and share it with your fans: a body of work, a collection of songs reflecting a certain point in time in your life. But only consider it when you already have a big, tight-knit fanbase. If you still want to do it, release four or five songs from the album as singles throughout the year to get the desired outcome and reach. 

You could consider selling the album as merch or as a digital download for super-fans when the first single is released. This way you’ll make a lot more money from the music you made. After a year, release it as an album or as singles, so you have the right amount of content to keep growing on Spotify. 

The most important thing to keep in mind is what feels good for you and makes you satisfied. If a song is not ready, don’t release it. There’s no sense in pushing a song without having something that is good enough. 

Our last quick tip for promoting your songs on Spotify playlists: try to reach out to Spotify if you know someone there. A contact like that is gold, and worth a ton! These people are hard to reach because everyone wants to be on those playlists; even the big guys are begging for playlist spots. Attend networking events and conferences like ADE to meet the people behind the streaming platforms and get your name out there.

Create your own playlist and trade spots with other artists. Grow your playlist organically, make sure your fans follow it, and maybe consider gating the link: when you release some free music, require people to follow your account or playlist on Spotify to get it.

The rest is then completely up to chance; maybe someone at Spotify likes your song or maybe the algorithm likes your song and features it. You could also have a huge hit by just releasing one song once a year. Everything is possible, but these are the recommendations of someone who knows how the algorithm works.


 Have fun creating awesome music and don’t forget to submit your demo to us so we can help you get your music heard!