Aggregator
Broadcast Actions
Broadcast Applications
Construction Permit for new FM Translator, El Cajon, California
Pleadings
Actions
Letter: Helping Sisters High School With a Three-Hop Marti
The author is general manager of KNRL/KNLX in Oregon.
Paul, regarding the article “How WOGO Helped a Wisconsin School Graduate Seniors”:
I don’t know if you want any more graduation stories but I’ll give you ours.
It began when Pastor Jerry Kaping of Wellhouse Church in Sisters, Ore., called the station and wanted us to broadcast a drive-in church service for Easter, which we did as churches were closed.
A scene from the graduation event. Photos courtesy Principal Joe Hosang
With schools closed due to the pandemic, Sisters High School was looking for a way to honor its graduating seniors. Principal Joe Hosang called Pastor Jerry and asked if he thought the Christian station would broadcast their drive-in graduation. And Pastor Jerry generously committed his church to pay for the broadcast.
Graduates and families drove to the Sisters Rodeo grounds, where vehicles were parked, spaced appropriately. They could then tune to KNLX 104.9 and listen to the ceremony while watching from their vehicles.
The ceremony started at 6:15 p.m. with graduates singing songs that were pre-recorded. Valedictorian speeches also were pre-recorded. It culminated with seniors walking across the platform, appropriately spaced and wearing masks.
Family members cheer from a social distance.
I’m not sure how many times we played “Pomp and Circumstance” but it was more than several. We concluded around 8:30 p.m.
I’ll say just a little about the technical aspects.
In this digital day and age, I suspect many younger broadcasters do not have a clue about the man named Marti or the equipment that bears his name. But drive-in church services and drive-in graduations will not work well using streaming, because of the delay, even where an internet connection exists.
In our case this was a three-hop Marti radio link due to terrain (see photo at bottom). One shot to a mountain, a second shot to our main transmitter site and then a shot to the studio. In all, over 60 miles. And it sounded very good.
Yes this was a break in format, but it was something we could do for the kids and the community, and that is what broadcasting is all about … right?
The graduation dais is at center. The antenna feeding the radio station is faintly visible at far right above the bleachers.
C-Band Auction on Track as Court Denies Sat Ops’ Stay
All plans for the C-Band auction will remain on schedule for now, as the U.S. Court of Appeals for the D.C. Circuit denied a motion to stay the auction by a group of satellite operators.
The motion was officially filed under PSSI Global Services LLC, but was also supported by ABS Global Ltd., Empresa Argentina de Soluciones Satelitas S.A., Hispamar Satélites S.A and Hispasat S.A. They argued that the FCC initiated a chain of events — starting with the election by space station operators to relocate from the C-Band on an accelerated schedule — that would harm them by “benefitting competing space station operators that are eligible for relocation and accelerated relocation payments and depriving them of spectrum access rights without compensation.” In addition, they said the FCC did not have the authority to modify their spectrum rights, gave out too much money in accelerated payments and arbitrarily excluded them from getting those payments.
[Read: C-Band Spectrum to Be Cleared on Accelerated Timeline]
The court ruled that the “[a]ppellants have not satisfied the stringent requirements for a stay pending appeal.” This means the C-Band auction will continue as planned until the court hears the challenge on its merits and issues a judgment at that time. The court has asked both parties to submit briefing by June 29.
“Today’s ruling is great news for American consumers and U.S. leadership in 5G,” said FCC Chairman Ajit Pai. “I am very pleased that the D.C. Circuit rejected this attempt by small satellite operators with no U.S. operations in the C-Band to delay our efforts to repurpose critical midband spectrum. The FCC will continue to defend our order on the merits, and I look forward to our C-Band auction beginning on Dec. 8.”
The FCC had denied a similar petition to delay the start of the auction by the same group of international satellite operators two weeks ago, according to Radio World’s sister publication Multichannel News.
The C-Band auction will see current C-Band operators consolidate their operations into the upper portion of the band as the FCC frees up 280 MHz for 5G. More information is available on TV Technology’s C-Band hub page.
Cloud-Based Automation Is a Reality; Now What?
The author of this commentary is VP of operations at DJB Radio Software Inc. This commentary is excerpted from the Radio World ebook “Trends in Automation.”
Virtualization. Cloud. Untethered Radio.
A couple of years ago I was invited to give a talk at my local AES chapter about remote broadcasts. As a lifelong radio guy I have stories aplenty (as most of us do), and the AES folk were fascinated by my tales of “guerrilla engineering.”
On this particular occasion I gave a humorous history of radio remotes starting from the days of literally bringing the radio station to the remote site via a cargo van (or horse-drawn carriage) to today’s more rational events. These might include a small mixer, a couple of mics and a laptop or two, but are still firmly rooted at a table and plugged into a wall.
I then got all “what if” and started talking about the radio remote of the future. I envisioned the radio host as a one-man band, going from place to place in a shopping mall with nothing more than a tablet strapped to their arm and a headset mic (Bluetooth, of course) on their head. I raved like a lunatic about cloud-based this and virtualized that, with AES67 to deliver audio and AES70 managing control protocols. No wires or other obsolete shackles to hold our fearless host back — no broken folding table and threadbare chairs — just untethered freedom!
Little did I know that my seemingly far-fetched, Roddenberry-esque model would start coming to life in short order, or that it would also become a model for brick-and-mortar radio stations — not just remotes.
Virtualization is here. Cloud is here. The question is — how do we make it work?
LITTLE C, BIG C
Adam Robinson
In 2018 I took on my current position with my lifelong friend Ron Paley at his second automation venture, DJB Radio Software. Among the challenges presented was to come up with a cloud model for the newly minted DJB Zone radio automation platform.
No problem! We’ll go get some space at AWS, spin up a cloud server and off we go. Right? Well … partly.
If all we want to do is run an automation system in the cloud, DJB Zone, or any of the popular automation platforms, can accomplish the task by simply using the cloud to house data or to run the software virtually on a cloud-based server. An HTML interface or third-party remote access software can get you to the dance, so to speak, and virtual sound drivers can send audio back to your studio or direct to your transmitter site. Let’s call that model “Little C” cloud.
Expectations are high among the decision makers in the industry that we can further rationalize operations by employing this wonderfully cost-effective place called “the cloud” to replace expensive brick and mortar studios. We’ll call that model “Big C” cloud and it is a complex beast.
SHOWING BACKBONE
If what we need is something that resembles the traditional radio model of mics and phones and multiple audio sources and codecs with a host (or hosts) in multiple locations all contributing to one broadcast without so much as a single physical fader, we’ve got quite the hill to climb. Getting automation in and out of the cloud is one thing, but what about the backbone?
First and foremost, there’s the issue of reliable internet connections — even the most robust fiber pipe suffers from downtime. Next, we have to tackle multipoint latency, not only in audio but in logic (LIO) control. And then there’s the issue of a virtualized, cloud-based mixing console that can handle inputs from all over the place and sync all of this disparate audio.
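To get a rough feel for the numbers involved, a quick probe along these lines will report the round-trip time and jitter that any remote audio or logic-control path would have to live with. This is a minimal sketch; it assumes a simple UDP echo responder is running at the cloud end, and the hostname and port shown are placeholders.

```python
# Minimal sketch of a cloud round-trip probe. It assumes a simple UDP echo responder
# is running at the cloud playout host; the hostname and port below are placeholders.
import socket
import statistics
import time

ECHO_HOST = "cloud-playout.example.com"
ECHO_PORT = 9000
SAMPLES = 50

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for seq in range(SAMPLES):
    sent = time.perf_counter()
    sock.sendto(str(seq).encode(), (ECHO_HOST, ECHO_PORT))
    try:
        sock.recvfrom(64)                                   # wait for the echo
        rtts.append((time.perf_counter() - sent) * 1000.0)  # round trip in ms
    except socket.timeout:
        pass                                                # a lost packet counts against reliability
    time.sleep(0.1)

print(f"received {len(rtts)}/{SAMPLES} replies")
if len(rtts) > 1:
    print(f"mean RTT {statistics.mean(rtts):.1f} ms, jitter (stdev) {statistics.stdev(rtts):.1f} ms")
```

Mean round trip tells you how far behind a remote fader or GPIO closure will be; the jitter and loss figures tell you how much buffering the audio path will need on top of that.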
“It works for the streaming services — why not for traditional radio?” asks the most vocal member of the peanut gallery.
For starters, radio has a very different business model — it is not an on-demand service, nor is it entirely “canned” content. It also has a fickle audience — for generations now, radio listeners have been trained to be impatient. With that in mind, I generally respond to our vocal friend with the following — if it takes a few extra seconds for Apple or Spotify or Pandora to buffer, the average listener happily sits there watching the little wheel or hourglass go around. If a radio station disappears for a few seconds, that same listener will hit seek and move on to the next available frequency that IS playing something.
Live. Local. Immediate. The three hallmarks of radio since the dawn of the golden age. Lose those and we may just lose radio as we know it. This is the challenge facing not only the software companies but the hardware manufacturers too.
“Little C” cloud-based automation is a reality — there are some rough corners to smooth out yet, but we’re getting there. It’s the challenges of “Big C” that must be overcome before we can truly virtualize and “untether” radio. In the meantime, we can happily enjoy the many benefits of virtualizing radio automation systems in a central TOC or a cloud platform, saving money and increasing synergies among markets. Let’s invest those reclaimed resources in coming up with a new model for radio that will see it into its second century.
Adam Robinson is a 25-year radio veteran who has worked on both sides of the mic. An early adopter of radio automation and AoIP systems, he is now VP operations for DJB Radio Software. Contact him at adam@djbradio.com.
Should Translators Originate Content? FCC Is Taking Comments
Should FM translators in the United States be allowed to originate some programming content? The FCC is asking for public comment on that question, prompted by a request from two dozen radio companies.
It’s an idea that seems to have been prompted by a separate unrelated proposal about using boosters for geo-targeting; but if adopted it could mean a proliferation of yet more programming choices on the FM dial, much of it on translators that were obtained relatively recently by AM stations as part of the AM revitalization effort.
As we reported in May, a group of licensees under the joint name “Broadcasters for Limited Program Origination” told the FCC in a filing that “to serve the public interest with increased program diversity,” both FM boosters and translators should be allowed to originate programming for up to 80 hours a week. The 24 licensees own 108 full-service stations and 85 FM translators.
[Read Radio World’s original article about this proposal.]
Their attorney, John Garziglia of Womble Bond Dickinson, notes that the FCC now has released a public notice putting this idea out for industry comment.
The petition argues that if the FCC considers allowing FM boosters to originate limited programming content to provide zoned programming to a primary station’s service area — as has been proposed separately by GeoBroadcast Solutions — the same opportunity for limited program origination should be given to translators.
“The Broadcasters for Limited Program Origination seek a uniform FCC rule change for both FM boosters and FM translators to allow each to originate programming content provided that the primary station is retransmitted for no fewer than 40 hours in any calendar week,” Garziglia wrote in a summary to industry journalists.
“The Broadcasters for Limited Program Origination observe that some radio stations may choose to broadcast different localized advertisements. Other stations may broadcast localized city council meetings for two or more communities in their coverage areas. Some broadcasters may determine what serves a particular station’s listeners are multiple localized high school sports games. Or, another broadcaster in a diverse area may broadcast two different kinds of ethnic entertainment programming at certain times of the day.”
The companies argue that this change would go along with the commission’s encouragement of diverse programming content; but also that if boosters get a “regulatory easing” on content choice, so should translators.
“Also, because the FCC’s new FM translator interference rules have re-defined the coverage contours of FM stations, the Broadcasters for Limited Program Origination advocate that extended coverage contours out to the greater of the 45 dBμ contour, or a 25-mile radius from the FM translator transmitter site, should now apply to what is regarded as a fill-in station for the purposes of the FM translator rules,” Garziglia wrote.
And the group wants the FCC to change its FM translator rules to give four-letter call signs with the suffix “-FX” to FM translator stations that originate limited programming content, presumably to help market these content sources as separate stations. The current rules give translators clunkier call signs like W250BC and K237FR.
They noted in their original request that they “take no position as to whether the GeoBroadcast Solutions technical proposal … is wise as a radio listener reception matter. Such concurrent broadcasting of different content on the same frequency within the same service area may be an interference disaster.” Rather, they wrote, their goal is “to provide diverse programming over FM translator and booster radio facilities without the FCC’s heavy thumb restricting their choice of content.”
[Related: “Large Groups Raise a Caution Flag on Geo-Targeting”]
The broadcasters in the filing are Miller Communications/Kaskaskia Broadcasting; the Cromwell Group of Illinois and Hancock Communications; TBE LLC; SSR Communications; Port Broadcasting; the Fingerlakes Radio Group and Chadwick Bay Broadcasting; Blackbelt Broadcasting; Mazur LLC; The Original Company, Old Northwest Broadcasting and The Innovation Center; Virden Broadcasting; Lovcom Inc.; Genesee Media Corp.; Viper Communications; Mountain Top Media; Eastern Shore Radio; and MTN Broadcasting and Eldora Broadcasting.
Among familiar broadcaster names on the proposal are Randal Miller, Bud Walters, Terry Barber, Mark Lange, Matt Wesolowski and Cindy May Johnson.
The commission is asking that comments about RM No. 11858 be submitted via its comment system by July 23.
Super Hi-Fi Queues up Streaming Music
Don’t blame Zack Zalon for all of the job losses at iHeartMedia earlier this year.
Fingers began pointing Zalon’s way after the radio broadcaster implemented a technological shift to artificial intelligence to help its radio station clusters operate more efficiently. Subsequently, a large number of iHeart employees were let go.
Zalon is CEO and co-founder of Super Hi-Fi, an AI company that designs digital music solutions for the iHeartRadio streaming platform. That relationship drew scrutiny from some radio industry observers who speculated the broadcast giant’s infrastructure overhaul included the use of Super Hi-Fi’s MagicStitch technology, an “audio stitching” program capable of creating “human-like” segues between online music tracks in playlists.
“We are dealing only with the iHeartRadio streaming people,” Zalon said. “We are working on the innovation side, which is streaming-based. Not terrestrial radio.”
iHeartMedia’s massive reorganization included the creation of AI-enabled Centers of Excellence, according to a company press release at the time. The broadcaster pointed to the improvement of its technology backbone, in addition to strategic technology and platform acquisitions like Jelli, a programmatic ad platform; RadioJar, a cloud audio playout company; and Stuff Media, a podcasting firm.
Super Hi-Fi was not mentioned by name in the iHeartMedia announcement.
BRIDGING A GAP
“We are working with iHeart on a very deep level to bridge that gap between broadcast and digital. There is a lot of roadmap stuff to improve the audio experience,” Zalon told Radio World.
The radio business “seems like an underdog right now,” he said, “when actually radio is still the number one form of music consumption in America. Radio has a lot of great experiences and resources.”
However, it seems “broadcasters just don’t know how to view streaming and whether it is a threat or not. And streaming media people think radio is old technology and not all that valuable,” he said.
Zalon says broadcasters and media companies have been reaching out to him during the COVID-19 pandemic in search of opportunities to add efficiencies to technical operations via Super Hi-Fi’s technology platform.
“Broadcasters are searching for a way forward that brings together broadcast and digital and drives revenue and loyalty. Broadcasters have been talking to us about inserting our technology into the broadcast stack for the purposes of efficiency. And when I say efficiency I mean using the resources they could free up for the artistry of radio. Focusing on the curation, the production and the human voice, which makes radio so effective,” Zalon said.
Broadcasters are realizing, Zalon said, that some broadcast technology could be more efficient if AI assisted them with things like placement decisions in their automation.
“Programmers are just lining things up in automation systems really, and that isn’t necessary anymore when AI can do it for you automatically. AI can make a lot of presentation decisions,” Zalon said.
“But AI isn’t a job killer. There hasn’t been a single service we have integrated into, iHeart included, that hasn’t utilized more human resources after figuring this out. When streaming audio works you need more people to curate music. You need more people to work with advertisers to inject commercials in the system. And produce those commercials.”
[Related: “Is Artificial Intelligence Friend or Foe?”]
AI is not a replacement for people yet, he said, but an “enabler of human capabilities that has never existed before.” But Zalon does envision a day when computer-generated voices sound as real as a human voice and pop up on iHeartRadio streams.
WHERE THE ENERGY IS
Zalon said Super Hi-Fi’s primary focus remains enabling new audio streaming experiences and bridging the gap between what he thinks are “silos of broadcast radio and digital” that haven’t been bridged.
“We want to enhance experiences by taking the concepts of broadcast and engineering solutions. Streaming audio is where it’s going. Streaming media is fantastic. The sound quality is incredible. The personalization options are amazing. That is where all the energy is moving toward. We are interested in bridging the silos. Radio services will ultimately all be streaming when 5G is in the car.
“And when 5G is in the car what will be the point of connecting to a broadcast tower? Streaming is a technology not a technique. As technology evolves we think the technique should evolve as well. I think broadcasters are beginning to recognize that,” he said.
Zalon’s background is steeped in digital music experience, including building one of the earliest consumer digital music platforms, Radio Free Virgin, which was part of Richard Branson’s Virgin Group. At other points he has helped launch and design digital music services for CBS Radio, Sony Music, AOL Radio, Muve Music and Yahoo Launchcast.
Zalon handles the strategic direction of Super Hi-Fi, which he launched in 2018 with co-founder and Chief Technology Officer Brendon Cassidy. The AI company, based in Los Angeles, works with a variety of companies and has about 35 employees.
Digital music streaming’s lack of flow and production quality has always been an issue, Zalon said, with too many dead gaps in the music and a lack of emotion.
Super Hi-Fi and iHeartRadio announced their partnership in 2018 with a goal of creating intelligent audio transitions in the iHeartRadio app. MagicStitch is also deployed by Peloton and the recently launched Sonos Radio. And Super Hi-Fi just announced a partnership with Octave Group, which provides retail music entertainment in locations like Starbucks.
The patented MagicStitch system adds things like transitions, sonic leveling and gapless playback to the iHeartRadio digital stream, Zalon said.
“Radio is our inspiration. And I think one day radio owners will realize they hold the keys to digital listening experiences. They just haven’t activated them correctly. They have not seen them as assets but instead as liabilities. We see that totally the other way around,” Zalon said.
“Radio broadcasters have the tools and experience to create these incredible professional-sounding broadcast streams to make the digital music experience exciting. They have the tools to make the digital media experience stickier and more valuable than what is in the marketplace right now.”
PERSONALIZED AND SCALABLE
Super Hi-Fi has developed a technology that can deliver that vision, Zalon said, via MagicStitch and its ability to be more than just a playlist with long gaps of silence.
The AI system consists of a layer of cloud services, APIs and components/reference implementations for major mobile and desktop environments, according to a press release. The results are personalized and scalable listening experiences (see sidebar at end of this article).
MagicStitch, to borrow a broadcast term, takes the dead air out of audio streaming, Zalon said during a recent demonstration of the digital platform. The technology “stitches” together transitions between songs as if done by a real human DJ.
“Our research is focused on understanding audio content to the same depth as a human. When we were building CBS Radio’s digital platform, we all thought the gaps in the music were terrible. Pandora was around at the time. They all sounded the same if you closed your eyes. We thought, what if we were to smartly use radio techniques to stitch songs together to improve the experience? Then we started thinking about segues, how many different combinations there could be and how to figure that out algorithmically.
“Well, we soon figured out it wasn’t possible at that time. The number of segue calculations was literally in the trillions. So we went on building these music services, but they still didn’t sound quite right.”
[Related: Read the Radio World ebook “AI Comes to Radio”]
Zalon said he and Cassidy realized it was impossible to write enough algorithms to solve the segue problem and instead began to focus on training artificial intelligence to do what radio DJs do. “For the AI to be smart enough to have the dexterity of a trained human DJ,” he said.
“Our belief is that it’s the techniques of radio, the music transitions, the voice branding and all of those other elements of radio that makes the digital product stand out.”
Music services like Spotify and Apple Music use a “cross-fade” function to help cut down on the gaps between tracks, Zalon says, but the problem is the platforms still don’t capture the subtlety of the human touch.
A MagicStitch transition from Super Hi-Fi’s testing application.
“It’s not all mechanical. MagicStitch in real time calculates what it thinks is the perfect segue for any two tracks you might play back to back in a playlist. And uniquely for those two songs. MagicStitch reaches back to our cloud server and gets back the proper instruction and then aligns it down to the correct thousandth of a second. It considers rhythmic elements and lets the previous song play out the right way. Whatever it takes to make it sound radio worthy,” Zalon said.
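Super Hi-Fi has not published how MagicStitch makes these calculations, but the basic idea of a level-driven segue can be illustrated with a deliberately naive sketch. The file names, window size and threshold below are placeholders chosen for the example; a production system would also weigh rhythm, branding elements and listener context.

```python
# A deliberately naive illustration of level-based segue selection (not Super Hi-Fi's
# algorithm). Assumes two mono WAV files at the same sample rate; the file names,
# window size and threshold are placeholders chosen for the example.
import numpy as np
import soundfile as sf

def rms_db(frame):
    """RMS level of a block of samples, in dB relative to full scale."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    return 20.0 * np.log10(rms)

def find_segue_point(audio, rate, window_s=0.25, threshold_db=-30.0):
    """Return the sample index where the outro first drops below threshold_db."""
    win = int(window_s * rate)
    start = max(0, len(audio) - 20 * rate)          # only search the last 20 seconds
    for i in range(start, len(audio) - win, win):
        if rms_db(audio[i:i + win]) < threshold_db:
            return i
    return len(audio)                               # no quiet outro: butt-splice at the end

out_track, rate = sf.read("outgoing.wav")
in_track, rate_in = sf.read("incoming.wav")
assert rate == rate_in, "example assumes matching sample rates"

cut = find_segue_point(out_track, rate)
overlap = min(len(out_track) - cut, len(in_track))

# Linear crossfade across the overlap region, then append the rest of the new song.
fade = np.linspace(1.0, 0.0, overlap)
mixed = out_track[cut:cut + overlap] * fade + in_track[:overlap] * (1.0 - fade)
stitched = np.concatenate([out_track[:cut], mixed, in_track[overlap:]])
sf.write("stitched.wav", stitched, rate)
```

Even this crude version shows why precomputing every pairing is hopeless at streaming scale: the decision depends on the specific pair of files, which is why Super Hi-Fi computes it in real time.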
However, MagicStitch does more than segues, Zalon says; it can also brand the digital stream much like radio does with the human voice.
“Music transition is the core of what we do. The next step was training MagicStitch to understand branding elements and the human voice with that same level of depth. It uses radio techniques like interview snippets that don’t step all over the music in an inappropriate way to build a personality into a streaming service,” he said. “Now we can assign the branding component based on listener preferences and interject voice elements like broadcast radio does.”
MagicStitch can layer multiple elements into the stream, such as audio liners, commercials and branding messages, he said.
“It’s capable of delivering a seamless layered stream experience to a smart speaker,” Zalon said.
And the AI system gets smarter each time it performs a song segue, Zalon said. “The platform has a feedback loop, so it is digesting a lot of machine learning advances all the time and understanding content better. So as the data grows and more calculations are added, MagicStitch can present content in more creative ways,” Zalon said. “It essentially gets smarter with each audio transition.”
MagicStitch currently completes a billion streaming song transitions across multiple services each month, according to Super Hi-Fi data.
Comment on this or any story. Email radioworld@futurenet.com with “Letter to the Editor” in the subject field.
Sidebar:
More From Zack Zalon
We asked Zalon further questions about how MagicStitch software works and about the company’s technology in general.
Radio World: What physical signal parameters are being measured and assessed about a particular music track to define the way that Super Hi-Fi handles that track?
Zack Zalon: For starters, I’ll share that we are gathering a tremendous amount of data on the audio files. Yes, we are collecting countless features, but we are also gathering some very unique attributes from our machine learning services, as well as from over 1 billion data points from commercial usage that we collect every month.
The amount of data that we collect on each file is actually larger (in storage terms) than the source file itself. There are literally millions of data points that we collect, and then the trick is to train the AI to actually use these data points.
RW: Exactly how is the “human touch” of a segue developed for each track?
Zalon: For us, the key is not data per se, it is the idea of context. Yes, we need data, and a lot of it. But the data for us is a means to an end. What we’re working toward is a perfect contextual understanding of the audio file so we can automatically make really artful, human-like decisions about how to handle that content.
How does a quiet song transition into another quiet song? How does that same song properly transition into a higher-energy song? Does having a female singer make a difference, does it change the way a listener will react to a specific song transition? Should it be different if there is an advertisement that comes afterward? Should there be talking over the song?
These are the questions that we have been tackling, and then working backward to modify the service to ensure that it understands — comprehends — the content with enough depth to be able to make the right choices, all day every day.
RW: Does the Super Hi-Fi algorithm analyze different segments of an audio track differently?
Zalon: More specifically, we are collecting all of the data points you asked about earlier, though we use LUFS as a measure, not LKFS. But we also have designed and developed dozens of proprietary analysis tools and associated proprietary data points to measure. Existing tools weren’t giving us the broad-based view of the content that we needed for the AI to work properly. Please note that we aren’t just looking at music files, we are also analyzing spoken word, sound effects, advertising (of numerous types), sonic logos, etc.
So using traditional music analysis techniques wouldn’t be sufficient. Also to be specific, we analyze the entire file, not just any one section, and we analyze the difference of each data point so we can build a richer base of understanding regarding that file, how it changes over time, and how it relates to the other files that we may be stitching around it.
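For readers who want to experiment with the kind of whole-file LUFS measurement Zalon describes, here is a minimal sketch (not Super Hi-Fi’s code) using the open-source pyloudnorm and soundfile packages; the file name is a placeholder.

```python
# Minimal loudness-measurement sketch (not Super Hi-Fi code). Assumes the open-source
# pyloudnorm and soundfile packages are installed; "track.wav" is a placeholder file.
import pyloudnorm as pyln
import soundfile as sf

data, rate = sf.read("track.wav")            # float samples, mono or multichannel
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter

print(f"integrated loudness: {meter.integrated_loudness(data):.1f} LUFS")

# Loudness of 3-second blocks, hopping one second at a time, as a rough trace
# of how the level changes across the file.
window, hop = 3 * rate, rate
for start in range(0, len(data) - window, hop):
    block = data[start:start + window]
    print(f"{start / rate:5.0f} s  {meter.integrated_loudness(block):6.1f} LUFS")
```

The single integrated number describes the whole file; the block-by-block trace is the sort of time-varying view Zalon says the system builds, here reduced to one data point per second.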
RW: Does the AI system do any audio correction or modification of the tracks?
Zalon: We do not do any audio correction or modification. In fact, we don’t actually deliver any files. Our customers deliver the files, what we do is to send them a set of presentation instructions in real time that they use to create their experiences. Everything for us is about placement, as though it is being mixed by a human DJ at a broadcast radio station. But it is actually AI making all of the calculations and sending those to our customers as they are requested.
RW: What really differentiates your AI from a cloud-based automation solution? There seem to be automation systems that can do the same right now. They have been stitching audio, liners and segues for decades. Is MagicStitch simply automation for the cloud?
Zalon: Today’s radio automation systems have some of these capabilities, like an Auto Jock, but they are very different from Super Hi-Fi. These radio systems do a great job of automating for a linear terrestrial broadcast, using specific human annotation points — such as segue points — added in on a very select number of content files, be they music, voice liners, or advertising.
Super Hi-Fi is built for the scale and breadth of today’s largest digital streaming services, where the number of content options are virtually limitless, and the number of personal experiences are just as broad. With our AI, the data is all analyzed and annotated with no human intervention, so our system understands an incredibly wide array of music features on literally tens of millions of content files. Each decision — whether it be a song segue, a voice liner, a podcast snippet, or an advertisement — is calculated in real time based on each specific set of content options and for each unique listener. This provides enormous flexibility and control, and allows large streaming music services to start delivering radio-like listening experiences without limiting the kind of unique, personalized experiences that consumers have come to expect.
So, in a way, the outputs of the experiences are somewhat similar. We are very influenced by how radio uses production techniques to create differentiation and to build amazing branded services. We’re just coming at it from a very different direction and for use in a very different way.
The best example of this is in a comparison of scale: On a broadcast radio station, you can expect there could be perhaps 10 “transition” moments per hour (segues, liners, etc.), which adds up to around 7,200 per month. Super Hi-Fi is currently generating over 1 billion transitions per month for our customers. That’s the equivalent of us powering 138,000 broadcast radio stations, 24/7, all in real time. Today’s radio automation systems are fantastic at what they do, but they just aren’t built for the same use case.
RW: Are you collaborating at all with RCS, a company owned by iHeartMedia? RCS has a cloud solution for radio automation.
Zalon: We have a ton of respect for RCS, they’re definitely top of their field. But again they are focused on radio automation, and that’s not what we do. We are enabling unique, radio-like experiences for digital music streaming services, and so our technologies are very different from one another. That said, there’s no reason why we couldn’t collaborate with them; in some ways I imagine we’re each very complementary to what the other does.
RW: You talk a lot about creating efficiencies with MagicStitch. What specifically do you add to the “broadcast stack”?
Zalon: When we talk of efficiencies, we are generally referring to the breadth of streaming music services. Imagine the difficulty of having to manually tag all 51 million music files that exist on today’s services. Imagine having to program the transition technology to handle hundreds of millions of listeners, and trillions of possible content combinations. It’s just not achievable without the kind of efficiencies that our AI provides. Now, I imagine that there are efficiencies available to radio broadcasters as well.
As an example I can state with confidence that we’re gathering vastly more data on each piece of content than any human would be able to assess. So that’s one specific example. But as to where we add value to the broadcast stack, I would guess that it would be different for each radio service, based specifically on their individual goals.
RW: If Super Hi-Fi AI can make placement and presentation decisions, what specific decisions does it make? Could the AI replace the need for radio broadcasters to schedule music and promos, or even commercials?
Zalon: Super Hi-Fi makes presentation and production decisions, but it doesn’t program music. I would guess that a radio broadcaster could use some automated programming technology, but humans seem to do a much better job of that. Our technology takes what has already been programmed and automates the presentation so it sounds amazing, with all of the segues perfectly designed for just that set of content, without human intervention.
RW: That said, talk of efficiencies typically means job losses in any business field. Where can Super Hi-Fi AI save broadcasters money? Can you give examples?
Zalon: I really can’t yet, as we don’t have any of those specific examples to give. Right now our customers are using Super Hi-Fi for next-generation streaming services, and in each of those cases our customers added employees. In other words they are using the efficiencies of our platform to grow listeners and revenue, not to drive cost savings.
Now, I imagine radio broadcasters could use our tools to save time and money, eliminating the need for anyone to add data to content or to align content in their radio automation services. But I think Super Hi-Fi is a more attractive option for broadcasters who want to use what they are already amazing at — incredible radio listening experiences — and to apply that to the next generation of listening. In other words, to take what they’re already doing but to do it across a new generation of listening platforms for a new generation of listeners. That’s where Super Hi-Fi really starts adding huge value.
RW: And those computer-generated voices you mention. When are those coming? Years or months? And how close are you to a solution?
Zalon: Great question. We aren’t a text-to-speech company, though we definitely keep our eye on the space. Amazon is doing some amazing things with their Polly service, and there are some very cool products that are in the early stages of commercial deployment. But let’s not forget that Bill Gates said in 1995 that the computer voice services would be amazing in five years, but here we are 25 years later and it still sounds computer generated. So it wouldn’t surprise me if it took another 25 years.
AM Notes From the Field
The author is vice president of business development for Orban Labs.
We all know that engineers have way too much on their plates and may not always have the time to check things thoroughly, especially with equipment that may, at first glance, appear to be operating correctly.
After spending a fair amount of time at multiple AM transmitter facilities recently, I have some observations on things that really should be checked more often.
MODULATION MONITORS
Out of the dozen or so AM sites I have been to since March of 2019, I haven’t found a single modulation monitor that was accurate.
For the sites I visited in this report, I carried a Belar AMMA-2 that I had calibrated by Belar just prior to the start of my visits (thanks Belar!) and I am certain it’s accurate. That being said, there are a few issues with mod monitors that I have found:
Not all modulation monitors indicate properly with MDCL or IBOC enabled.
— Non-MDCL/AMC-capable modulation monitors should not be used on transmitters running MDCL/AMC. Typically, I found these types of mod monitors were reading upwards of 40% higher than the actual modulation when compared to my AMMA-2. In some instances, the AMMA-2 showed 75% positive modulation on a transmitter running MDCL where the onsite modulation monitor was showing 115% positive.
— Modulation monitors out of calibration or broken. If your modulation monitor is going on 10+ years old and hasn’t been back for a calibration, odds are it’s not going to be accurate. I highly recommend sending it in periodically to make sure it’s operating correctly and in calibration.
— Incorrect setup, or making measurements off air. Most AM modulation monitors need to have the RF input set correctly. There is usually a “Cal” or “RF” level adjust for this. If this isn’t correct, your readings are going to be meaningless. While you are at it, you might want to check to make sure that the sample port output of the transmitter from which you are feeding the mod monitor is good. A scope will go a long way to check that the sample port is operating correctly. And, please, “just say no” to making AM modulation measurements off air … it’s not going to be remotely accurate.
TRANSMITTER PROBLEMS
I “broke” a couple of 1990s-era 50 kW AM transmitters during my tour. Both of those weren’t happy to begin with, and my attempts to get them to make full power at 125% positive modulation were met with a number of PA faults.
Once the PA problems were sorted out at these sites, I found that both of these were happier at 120% positive mod than 125% positive modulation. It might be that your aging transmitter simply won’t handle the higher positive modulation levels anymore.
Additionally, I found a couple of transmitters that had the audio polarity reversed … and that will also lead to a lot of unhappiness with trying to get positive modulation over 100%. I found a backup transmitter with its audio polarity reversed too. The main transmitter had the correct polarity.
If the transmitter with which I was working on my tour had a digital audio input available, I always used it. I typically found that peak control was within 0.5% using digital inputs.
And that brings me to LF tilt on analog transmitter inputs: a scope and a square wave generator will tell you quickly if you have an LF tilt issue on the transmitter. Put a 50 Hz square wave into the transmitter and take a look at the output of the transmitter on a scope. Adjust the transmitter LF EQ in the processor to minimize the tilt. I usually found I needed +3 dB at 3 Hz to flatten the square wave on older transmitters.
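If you would rather crunch numbers than eyeball the scope trace, a capture exported from the scope can be checked with a short script like this. It is a rough sketch that assumes a two-column CSV of time and voltage taken while the transmitter was fed a clean 50 Hz square wave; the file name is a placeholder.

```python
# Rough LF-tilt check from a scope capture (a sketch, not a standard procedure).
# Assumes the capture was exported as a two-column CSV of time (s) and voltage while
# the transmitter was fed a clean 50 Hz square wave; the file name is a placeholder.
import numpy as np

SQUARE_HZ = 50.0
capture = np.loadtxt("scope_capture.csv", delimiter=",")
t, v = capture[:, 0], capture[:, 1]

rate = 1.0 / np.mean(np.diff(t))                  # sample rate implied by the time column
half_cycle = int(rate / (2 * SQUARE_HZ))          # samples in one half-cycle at 50 Hz

tilts = []
for start in range(0, len(v) - half_cycle, half_cycle):
    seg = v[start:start + half_cycle]
    lead = np.mean(seg[int(0.1 * half_cycle):int(0.2 * half_cycle)])
    trail = np.mean(seg[int(0.8 * half_cycle):int(0.9 * half_cycle)])
    # Only use segments that sit cleanly inside one half-cycle (same polarity, well off zero).
    if np.sign(lead) == np.sign(trail) and abs(lead) > 0.05 * np.max(np.abs(v)):
        tilts.append(abs(lead - trail) / abs(lead) * 100.0)

if tilts:
    print(f"average droop across each half-cycle: {np.mean(tilts):.1f}%")
```

A droop figure that shrinks toward zero as you adjust the processor’s LF EQ tells you the tilt correction is dialed in, the same result you would get by flattening the square wave visually on the scope.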
You also might run into modulation overshoots (bounce). Bounce is typically a nonlinear problem caused by a sagging or resonant transmitter power supply found in older transmitters. Newer transmitters fed via a digital audio input typically do not have this issue.
If you are running MDCL, your transmitter may have issues of which you are unaware. It’s a good idea to periodically disable your MDCL and check your power output. It’s possible that your MDCL operation could be masking issues that may be coming into play.
AUDIO ISSUES
The biggest problem I found with audio was that the audio processor’s input levels were set incorrectly and as a result the processor’s AGC wasn’t operating optimally. This causes the AGC gating to misbehave, resulting in “pumping” and “breathing.” Check your specific audio processor manual for proper AGC setup.
In almost all cases, you want the AGC to be operating in its “mid-range” with nominal program levels. On current Orban processors, that is about 10 dB of AGC.
Proper input levels and AGC setup are key elements in good station sound.
Also check the gate level settings. At one station I ran into a processor that was showing that the gate was on all the time and I thought the processor was broken. Tech Support found that the gate level had been set 6 dB higher than where it should have been set. It had been misadjusted in an attempt to compensate for improper input level. The fix was to adjust the input sensitivity to drive the AGC to mid-range (which was an 8 dB increase) and reset the AGC gate to its nominal -30 dB setting. Recalling a factory preset would have also reset the gate to its normal level. It was a revelation to hear how much better the station sounded once the AGC and gate had been set up correctly.
Make sure you’re using the correct processing settings. Over time, the formats of many stations have changed, car radios have changed and the AM band’s noise floor has increased. What worked for processing when the station was running “Urban” 20 years ago won’t work for today’s talk format — you’re going to need a different processing preset on the processor.
I was recently working with an AM that just didn’t sound all that great, and we decided to start fresh with a factory stock preset. We used the “Music Medium” on their processor and added 2 dB of “brilliance” and the station sounded spectacular.
As part of our testing and adjustments, we listened to radios in both my rental car and the CE’s car, while the PD was driving around town in his vehicle. Sometimes a fresh start goes a long way toward making things sound better.
AM transmitters often sound subtly different from each other because many have levels of nonlinear distortion (THD and IM) that are large enough to be audible. So in terms of processing adjustments, one size does not fit all, and you may have to back off processing (mainly clipping) if the transmitter has higher levels of distortion than a modern transmitter.
The Nautel NX3, for example, specifies 0.8% THD and 0.5% SMPTE IM at 99% negative modulation. For its Flexiva 3D, GatesAir specifies typical THD of 0.3% and 0.4% SMPTE IM at 95% modulation.
Additionally, older receivers with diode envelope detectors produce significantly increased distortion when negative modulation exceeds 90%. This is not true of modern DSP-based receivers, however.
With proper modulation and processing, even older AM facilities can sound really good.
Take a critical listen to your station. Do the announcers sound “crunchy” with elongated, raspy sibilance? If so, it’s probably beating your Time Spent Listening (TSL) numbers to death. If you turn down the clipping, it will help considerably. Also, consider buying a newer processor. An early 1990s AM processor set to “Chernobyl” to try and get over today’s high noise floor environment isn’t going to cut it. And with all due respect, old analog processors just can’t be competitive any longer in most markets.
And then there is the PPM enhancer which many have set way too high — I call that setting “max rock crusher.” At that level, those tend to sound like a steel bowl being scraped with a whisk. A bit of a deft touch is in order to not sound like a Mixmaster with a bad bearing.
If you’re an engineer having problems sorting out your processing or arguing with the PD over proper processing settings, I’d be happy to personally chat with you or your PD. Email me at processing@orban.com.
My opinion is that with proper modulation and processing, AM stations can sound great and can run more efficiently (which will save your station some money!). It wouldn’t hurt to run a quick reality check the next time you’re at the transmitter or adjusting the processing.
Comment on this or any story. Email rweetech@gmail.com.
Letter: Don’t Shrug Off Benefits of AM Band in Digital
The author is a broadcast consultant based in Hamersley, Western Australia.
I wish to reply to Frank Karkota’s list of comments in his article “No to Digital AM.”
- You compared a crystal set and a digital radio. A crystal set consists of an aerial, a tuned circuit for station selection consisting of an inductor and a capacitor, a diode demodulator, perhaps another capacitor and a pair of headphones. By comparison, a software-defined radio consists of a much smaller aerial and a filter containing an inductor, with the capacitor on board a DSP chip specifically designed for digital radio. The DSP chip does all the tasks required to produce an audio signal, just like the diode in the crystal set. I suppose you could attach a pair of tuning switches and plug a pair of headphones into the analog output, although the high-impedance headphones of the past are no longer available. The main difference between the two is that the digital radio has stereo sound, and the distance to the transmitter can be considerably greater for everyday reception.
- As for the availability of parts, have you gone to a store and tried to buy a new variable tuning capacitor or a germanium diode? The silicon types used in power supplies are commonly available but unsuitable. Silicon Labs is in Austin, Texas, and has 1,500 employees. The parts suppliers who make the chips are not only in China but in South Korea, Taiwan and India. The receiver complexity is in one DSP radio chip designed for the purpose, which replaces the germanium diode. So if it fails, you replace the chip just as you would have replaced the diode had it failed. They are both “black boxes.” This article shows how to make a modern AM/FM/DAB+ radio. The signal processing in DAB+ is very similar to DRM except for the tuning bands.
- Infotainment systems in new cars use DSP, so it is easy to add digital reception of DRM, DAB+ and HD Radio (with a license fee). This ability is in the radio DSPs already. DAB+ and DRM radios are tuned by station name, not frequency. There is already a DRM radio that contains a Bluetooth hotspot, so the radio is tuned by a mobile phone while a box containing the receiver is connected to the antenna and puts out USB or FM stereo. Hybrid radio is pushing to send the station logo to the radio via mobile broadband, which is not necessary in DAB+; DRM can already do this. The HD Radio receiver will switch to mobile broadband instead of AM or FM when the digital signal contains too many errors.
- DRM sound quality has been upgraded through the use of a new compression algorithm called xHE-AAC. Listen to this on a good pair of stereo headphones. The Dream software has only recently been able to decode xHE-AAC signals.
- As the signal quality deteriorates, the AM signal becomes noisy, but the stereo DRM signal continues until the AM is unlistenable, then it will start muting on errors. DRM is also good at rejecting adjacent-channel interference. And thanks to its error correction, it removes the phasing effects you point out, which are caused by multiple reflections from the ionosphere.
- You can keep your car for 10 years if you wish and buy an adaptor to connect between the aerial and the existing car radio. You may need a mobile phone or a clip-on, dash-mounted controller to tell the adaptor which program to listen to. Norway now has no AM/FM broadcasts by major networks, only DAB+. Ratings have returned to normal since the conversion.
I would like to add the following comments of my own:
- In Europe, AM has been disappearing, so much so that many radios are either DAB+ digital and FM, or FM only.
- I would like to suggest that in the Americas, the virtually deserted TV Channels 2 to 6 could be used for DRM. There are enough channels available for all AM and FM broadcasters; and because there are no overlapping channels, high power can be used to give larger coverage areas than FM.
- AM started broadcasting 100 years ago and is very inefficient compared to DRM, where the electricity consumption is reduced by more than 67% because there is no carrier (a rough derivation of that figure follows below).
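The figure of roughly 67% comes from the standard AM power relationship, a textbook result rather than anything specific to one transmitter: at 100% sine-wave modulation the carrier still accounts for two-thirds of the transmitted power, so removing it (as DRM does) saves at least that much.

$$
P_\mathrm{total} = P_c\left(1 + \tfrac{m^2}{2}\right)
\quad\Rightarrow\quad
\left.\frac{P_c}{P_\mathrm{total}}\right|_{m=1} = \frac{1}{1.5} \approx 67\%.
$$

At typical average modulation depths well below 100%, the carrier’s share, and therefore the potential saving, is even larger.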
Comment on this or any article. Email radioworld@futurenet.com.
[Related: “BBC’s Fry: Digital on AM Is the Way Forward”]
Actions
Broadcast Applications
Media Bureau Announces Federal Register Publication of Report and Order in Amendments of Parts 73 and 74 to Improve the Low Power FM Radio Service Technical Rules
Applications
Pleadings
Broadcast Actions
Lawo Adds Remote Console Operation
Console maker Lawo has released the Mix Kitchen, a console remote control system.
The Mix Kitchen uses the Mackie HUI control surface protocol to provide the ability to remotely control Lawo mc2 console systems via any Mackie HUI-compatible control surface. Besides physical fader control, Mix Kitchen provides access to functions such as processing, bus control and presets. It is both Windows- and Mac-compatible.
[Check Out More Products at Radio World’s Products Section]
Lawo Senior Product Manager, Audio Production, Christian Struck said, “The Mix Kitchen setup works almost out of the box: no additional Lawo hardware, retrofits or upgrades are required. Audio engineers can work with an inexpensive fader panel that supports Mackie HUI, e.g. Icon Platform X or Behringer X-Touch, their laptop, a mouse and a tablet.”
Info: www.lawo.com