SnapStream Blog

Need a Volicon Replacement? Ask These Questions

December 16 2019 by Tina Nazerian


Is finding a replacement for your current broadcast monitoring and compliance solution on your to-do list? According to a recent survey we conducted, 73% of respondents are looking to do so by the end of 2020. Of those looking to replace their current broadcast monitoring and compliance solution by the end of 2020, 75% are specifically looking for a Volicon replacement (meaning, they’re using Volicon today). 

During your research, there are several questions you should ask yourself about features. (For tailored questions you should ask based on your organization type, read how three broadcast industry professionals—one at a cable company, the second at a local TV station group, and the third at an MVPD—would evaluate their next broadcast monitoring and compliance solution). 

 

1) Do I need loudness monitoring?    


    The loudness graph in SnapStream Monitoring & Compliance. 

In the United States, the CALM Act regulates the audio of TV commercials in relation to the TV program they’re accompanying. Having automated tools for finding loudness problems and being alerted whenever there’s an issue is immensely helpful. 
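
As a concrete illustration, here is a minimal sketch of what exception-based loudness alerting can look like, assuming you can already pull timestamped short-term loudness values out of your monitoring system. The sample structure, the -24 LKFS target from ATSC A/85, and the notify_engineer hook are illustrative assumptions, not SnapStream's implementation.

```python
# A minimal sketch of exception-based loudness alerting (illustrative only).
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List

@dataclass
class LoudnessSample:
    timestamp: datetime
    lkfs: float  # short-term loudness in LKFS (LUFS)

def find_loudness_violations(samples: Iterable[LoudnessSample],
                             target_lkfs: float = -24.0,
                             tolerance_db: float = 2.0) -> List[LoudnessSample]:
    """Return samples that exceed the loudness target by more than the tolerance.

    ATSC A/85 recommends a -24 LKFS target; the tolerance is a local policy choice.
    """
    limit = target_lkfs + tolerance_db
    return [s for s in samples if s.lkfs > limit]

# Example usage: page an engineer only when violations exist.
# violations = find_loudness_violations(samples_from_last_24_hours)
# if violations:
#     notify_engineer(violations)  # hypothetical alerting hook
```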

2) Do I need audio metering? 


    The  Multiviewer in SnapStream Monitoring & Compliance—with audio meters turned on. 

You might need to monitor audio levels without listening to the audio. If so, make sure your new solution lets you quickly determine audio levels for multiple feeds at a glance. 
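
For a sense of what "at a glance" metering involves under the hood, here is a minimal sketch that computes per-channel RMS levels in dBFS from raw PCM samples; the feed names and the get_latest_samples capture hook are hypothetical.

```python
# A minimal sketch of per-channel audio metering, assuming you can get raw PCM
# samples for each feed as float arrays (for example, decoded with ffmpeg).
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """Return the RMS level of a float PCM buffer (range -1.0..1.0) in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(max(rms, 1e-10))  # clamp to avoid log(0)

# for feed in ("WXYZ-HD", "WXYZ-SD"):               # hypothetical feed names
#     left, right = get_latest_samples(feed)        # hypothetical capture hook
#     print(feed, f"L {rms_dbfs(left):6.1f} dBFS  R {rms_dbfs(right):6.1f} dBFS")
```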

3) Do I need closed captioning monitoring? 


    Clip export with burn-in of closed captioning in SnapStream Monitoring & Compliance. 

The FCC dictates that TV stations, cable and satellite providers, and program producers are responsible for closed captioning compliance. If your organization is found to be out of compliance, the fines can add up—the FCC considers each episode of a program with defective captions to be a separate violation. It’s vital to have a tool that can help you verify that your closed captions ran as they should have. 
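
One simple way to verify captions ran is sketched below, under the assumption that caption cues have already been extracted from a recording (for example, from an SCC or SRT sidecar file): scan for long gaps with no caption activity. The gap threshold is a local policy choice, not an FCC rule.

```python
# A minimal sketch of caption-presence verification over a program window.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CaptionCue:
    start: float  # seconds from start of the recording
    end: float
    text: str

def find_caption_gaps(cues: List[CaptionCue],
                      program_start: float,
                      program_end: float,
                      max_gap_seconds: float = 30.0) -> List[Tuple[float, float]]:
    """Return (gap_start, gap_end) spans with no caption activity."""
    gaps = []
    cursor = program_start
    for cue in sorted(cues, key=lambda c: c.start):
        if cue.start - cursor > max_gap_seconds:
            gaps.append((cursor, cue.start))
        cursor = max(cursor, cue.end)
    if program_end - cursor > max_gap_seconds:
        gaps.append((cursor, program_end))
    return gaps
```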

 

4) Do I need to analyze ratings? 


     The ratings display graph in SnapStream Monitoring & Compliance. 

If you get ratings data from Nielsen or other providers, you can import that data to help you visualize it and analyze ratings performance for your content. For example, you can compare how different channels perform over specific times and dates to gain additional insights, such as whether you might have gotten low ratings on one channel because the majority of your viewers were watching another channel during that time period. 
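
As a rough illustration of that kind of analysis, the sketch below uses pandas to build a channel-by-hour ratings table from an exported CSV; the file name, column names, and channel names are assumptions for illustration.

```python
# A minimal sketch of cross-channel ratings comparison with pandas, assuming a
# CSV export with "timestamp", "channel", and "rating" columns (hypothetical).
import pandas as pd

ratings = pd.read_csv("ratings_export.csv", parse_dates=["timestamp"])

# Average rating per channel per hour, as a channel-by-hour table.
hourly = (ratings
          .assign(hour=ratings["timestamp"].dt.floor("H"))
          .pivot_table(index="hour", columns="channel", values="rating", aggfunc="mean"))

# Hours where one channel dipped while another spiked, suggesting the audience
# may have shifted between your own channels during that period.
shifted = hourly[(hourly["Channel A"] < 0.5) & (hourly["Channel B"] > 2.0)]
print(shifted)
```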

5) Do I need ratings audio watermark monitoring? 


    An alert from SnapStream Monitoring & Compliance about a missing audio watermark. 

Audio watermarking is an important part of having accurate ratings data. If your organization uses ratings audio watermarks (such as Nielsen audio watermarks), it’s important to have a tool that can alert you if those watermarks are missing.

6) Do I need SCTE-35 message monitoring? 


SCTE-35 message monitoring in SnapStream Monitoring & Compliance. 

If you work at an MVPD, you’ll want to be notified when there aren’t any commercial messages in the stream. If a broadcaster notices that an ad didn’t run, they’ll contact you saying they never received an avail message for that ad. You’ll then need to be able to jump straight to the date and time in question and look for the splice_insert for that particular avail.
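
A minimal sketch of that check appears below: given splice_insert events already parsed out of the transport stream (by your monitoring system or a SCTE-35 parsing library), it looks for a cue-out message near a scheduled break. The event structure and the alerting hook are assumptions for illustration.

```python
# A minimal sketch of verifying that an avail (cue-out) exists near a scheduled break.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, Optional

@dataclass
class SpliceInsert:
    event_id: int
    splice_time: datetime
    out_of_network: bool  # True = cue out (start of an avail)

def find_avail_near(events: Iterable[SpliceInsert],
                    scheduled_break: datetime,
                    window: timedelta = timedelta(seconds=30)) -> Optional[SpliceInsert]:
    """Return the cue-out splice_insert closest to the scheduled break, if any."""
    candidates = [e for e in events
                  if e.out_of_network and abs(e.splice_time - scheduled_break) <= window]
    return min(candidates, key=lambda e: abs(e.splice_time - scheduled_break), default=None)

# if find_avail_near(parsed_events, missed_break_time) is None:
#     alert_operator(missed_break_time)  # hypothetical alerting hook
```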

 

7) Do I need as-run log integration? 


     As-run logs in SnapStream Monitoring & Compliance. 

If you need to prove to advertisers that their ads ran, it’s important to have a tool that lets you easily find specific ads in the as-run logs, create clips, and directly email advertisers those clips. 
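
The sketch below shows the general shape of that lookup, assuming a simple CSV as-run export with air time, duration, and spot ID columns. Real as-run formats vary by automation vendor, so the column names and the clip-export hook are assumptions.

```python
# A minimal sketch of finding a spot in an as-run log and computing clip bounds
# for proof of performance (column names are hypothetical).
import csv
from datetime import datetime, timedelta

def find_spot(asrun_path: str, spot_id: str, pad_seconds: int = 5):
    """Return (clip_start, clip_end) around the spot's air time, or None."""
    with open(asrun_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["spot_id"] == spot_id:
                start = datetime.fromisoformat(row["air_time"])
                end = start + timedelta(seconds=float(row["duration_seconds"]))
                return (start - timedelta(seconds=pad_seconds),
                        end + timedelta(seconds=pad_seconds))
    return None

# bounds = find_spot("asrun_2019-12-16.csv", "FORD-30-1234")
# if bounds:
#     export_clip(*bounds)   # hypothetical clip-export hook
```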


With SnapStream Monitoring & Compliance, you can easily migrate your as-run and Nielsen ratings import configurations from Volicon. SnapStream Monitoring & Compliance is the official Volicon transition partner and has solutions for loudness monitoring, audio metering, closed captioning monitoring, ratings monitoring, audio watermark monitoring, SCTE-35 monitoring, and as-run log integration.

Loudness Monitoring: Developer Q&A

November 27 2019 by Tina Nazerian

This is the first blog post in our series, "Behind the SnapStream Monitoring & Compliance Feature." 

"With SnapStream, instead of broadcast engineers having to manually look for the loudness problems, the problems will come and find them." — SnapStream developer Paul Place 

The loudness graph in SnapStream Monitoring & Compliance.

Key Takeaways

1) When they were building the loudness feature in SnapStream Monitoring & Compliance, developers Paul Place and Tim Parker had extensive conversations with Volicon users. 

2) During those conversations, they learned the major pain points Volicon users had with the product—such as having to manually look at the loudness graph daily.

3) In turn, they built a solution with an emphasis on exception-based monitoring. 


When they were building the loudness feature in SnapStream Monitoring & Compliance, developers Paul Place and Tim Parker spoke to multiple prospective customers. They dug into what in Volicon worked for them, and what didn’t—so they could make loudness monitoring in our own product comprehensive and user-friendly. 

They recently discussed their journey developing the loudness feature. 

 


SnapStream developers Paul Place (left) and Tim Parker (right).       

SnapStream: What research went into building SnapStream Monitoring & Compliance’s loudness monitoring features? 

Tim Parker: We started with Volicon. We took a look at what was in the Volicon UI, and that gave us a bunch of hints on where we needed to start our research. 

When you open up the Volicon UI, you see things like ITU-R BS.1770 mentioned, or ATSC mentioned. Once you start looking at one of those documents, you realize there's a chain of documents that fit together. For the United States, it starts with the CALM Act, which then points to ATSC A/85 RP, which then points to the ITU-R BS.1770 reference. 

We spent time doing extensive research on loudness specifications and how loudness is computed. We did it this way because we didn't just want to follow what others had done without knowing how and why it worked. We wanted a deep knowledge of what loudness really is so we knew that the product we planned to build would work the way our customers expect.
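
For readers who want to experiment with BS.1770 measurement themselves, here is a minimal sketch using the open-source pyloudnorm and soundfile Python packages. This is not SnapStream's implementation, which the developers describe building directly from the specifications.

```python
# A minimal sketch of computing ITU-R BS.1770 integrated loudness for a clip.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("program_segment.wav")   # float samples, any channel count
meter = pyln.Meter(rate)                      # BS.1770 meter with K-weighting
loudness = meter.integrated_loudness(data)    # gated integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS (ATSC A/85 target is -24 LKFS)")
```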


You can easily look for loudness peaks in SnapStream Monitoring & Compliance's loudness graph. 

What insights did you gain from the conversations you had with Volicon customers? 

Paul Place: We focused a lot on how they were using Volicon. The Volicon UI is kind of confusing because there's a number of things they present that aren't in the standard. Were customers using short interval computations, for example? What are they, why do customers care? Volicon also has long interval integrated value computations. What are they, why do customers care? 

Interacting with customers and understanding their workflows helped us understand these features and pinpoint what was actually useful for them. 

For example, we put the short interval values into our product because it made sense to us—they allow the customer to do things like select the end of a program segment and figure out what the integrated loudness is for just that commercial. 

In other words, one of the biggest use cases for our customers is: is something too loud? The offender is usually a commercial. 

What happened with Volicon is that they sort of accrued features over time—some of them more useful than others. There's a lot there that we couldn't find anybody using.

Parker: It's an interesting ecosystem because there's basically two layers at play here. There's the layer of automation that ensures the loudness is normalized before it goes to the customer's set top box. 

And then we come in at the end of the chain, after it's been broadcast to the viewer. We're verifying that yes, this loudness is normalized. So when a viewer complains that something is loud, a broadcast engineer wants to analyze what went out to that viewer and see where it was loud. 

That helps the broadcast engineer identify what part of that chain—before it went out to the viewer—is not working properly. Broadcast engineers have devices and software that make sure that loudness is normalized, but sometimes they get out of spec or they stop functioning properly. We're the last step in making sure that everything is working properly for them.

Clip export with burn-in of loudness data in SnapStream Monitoring & Compliance.

What were some major pain points Volicon users had that you both addressed? 

Place: We started to see a pattern of these broadcast engineers being reactive. Volicon didn't really offer a good means of identifying program segments that were out of compliance. One broadcast engineer we spoke to would just scan and look at the peaks and valleys on the loudness graph. When he saw a peak, he would zoom in to see if there was a problem. 

He had to go through this very manual process daily. Every single morning, he got in and he had a checklist of things to do. One of them was to look at the loudness graph for any problems. 

We knew we could do better than that in SnapStream Monitoring & Compliance. We’re giving broadcast engineers automated tools for finding the loudness problems and alerting them. With SnapStream, instead of broadcast engineers having to manually look for the loudness problems, the problems will come and find them. 

Volicon had an alerting system, but it was difficult enough to use that nobody we talked to used it, at least for loudness monitoring and compliance. Volicon users said they got a lot of alerts that they had to sift through to find the things they cared about. That made it not useful. 


Loudness report in SnapStream Monitoring & Compliance. 

SnapStream Monitoring & Compliance has a loudness graph and clip export with the option to burn-in loudness data. It also generates loudness reports. Could you give some more details on each? 

Parker: I've seen screenshots of other loudness tools and they generally do just the report feature, or you get a spreadsheet with numbers essentially. 

I think the way we present the loudness graph in the UI makes it easy for users to interact with and scrub the data—versus having a spreadsheet, which is limited in what it presents. 

Place: I think the visual indicators are very useful. For example, looking for peaks on the loudness graph. Say you have a day’s worth of data. You can easily see if a part of your feed fell above the maximum loudness target, and then be able to drill down and learn more. 

We’ve put a lot of engineering effort into making the loudness graph usable and responsive. You can zoom in and zoom out, for example. 

Parker: The clip export with the option to burn-in loudness data gives broadcast engineers evidence they can send to someone—for instance, to a colleague, saying “Hey, you need to fix this. Here’s the data.” 

And the goal of the loudness reports is to help users close the loop, so to speak. It’s a way for them to present proof to the FCC or another external stakeholder. 


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes loudness monitoring, closed captioning verification, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real-time. 

3 Key Changes that Made Broadcast TV More Accessible in the U.S.

November 08 2019 by Tina Nazerian


 

Key Takeaways

1) The "Television Decoder Circuitry Act of 1990" set the stage for modern accessibility, requiring TV hardware to support closed captions.

2) At the same time, a surge in the number of captioning agencies lowered prices and helped make closed captions ubiquitous.

3) Closed captioning requirements came to some online video through the "Twenty-First Century Communications and Video Accessibility Act (CVAA)" of 2010. 


 

In the 1980s, if you wanted to view closed captions on your TV, you had to buy a separate set-top box or telecaption decoder. Nowadays, however, you can simply press a button on your remote, and the closed captions will appear on your screen. 

That’s just one way broadcast TV has become more accessible to audiences. As broadcast technology has advanced throughout the years, laws have been put in place to make sure those changes are accessible to a variety of people. 

Geoff Freed has spent more than 30 years leading broadcast, web, and multimedia accessibility initiatives at WGBH in Boston. He’s the Director of Technology Projects and Web Media Standards at the National Center for Accessible Media (NCAM), which operates out of WGBH. 

Throughout his career, he’s seen changes that have made broadcast TV more accessible. Here are the three biggest ones. 


Television Decoder Circuitry Act of 1990


A telecaption decoder.

Freed says that the initial broadcasts of TV programs with closed captions occurred in March 1980. Those TV programs were: 

1) "Semi-Tough" (ABC Sunday Night Movie)

2) "Masterpiece Theater" (PBS) 

3) "Son of Flubber" (NBC's Wonderful World of Disney)

However, if viewers wanted to view captions on their TV screens, they had to go out and purchase a set-top box or telecaption decoder. Telecaption decoders weren’t cheap. Freed says they cost about $250 back then, or about $700 today. 

“If you were deaf and you wanted to watch TV, you not only had to buy a TV like everyone else, but you had to find another device so you could follow along with what's going on,” he says. 

The Television Decoder Circuitry Act of 1990, which went into effect July 1, 1993, changed that. The law requires that any TV receiver with a picture screen 13 inches or larger, manufactured or imported for use in the US, have built-in decoder circuitry to display closed captions. The law also requires the FCC to ensure that as video technology advances, consumers continue to get closed captioning services. 

“With the enactment of the Television Decoder Circuitry Act of 1990, suddenly most TVs sold in the US could decode captions for no extra cost,” he says. 

 

A Surge in the Number of Captioning Agencies


WGBH's logo. Captions were originally invented for broadcast TV at The Caption Center at WGBH. 

Freed says that one of the major shifts that came with the Television Decoder Circuitry Act of 1990 was an increase in captioning agencies. 

He explains that captions were originally invented for broadcast TV at The Caption Center at WGBH. WGBH began providing open captions, or captions that viewers see and cannot turn off, in 1971. Open captions were the way that TV programs were captioned until closed captions, which viewers can turn off, came along around 1980. 

The Caption Center at WGBH was the world’s first captioning agency. The FCC set up the National Captioning Institute (NCI) in 1980 as a way “to quell objections from networks like ABC and NBC who did not want to pay a PBS station—WGBH—to create captions for their programming,” Freed explains. Shortly after the NCI was established, a third agency, VITAC, came onto the scene. They’re still in business. 

Only a small percentage of TV programs had captions until about 1991. There were no laws at that time mandating that TV programs had to be captioned. The closed captioning audience was small (Freed notes that by about 1992, only about 400,000 set-top closed caption decoders had been sold) and captions were expensive to create and code. Producers “were reluctant to spend money on captions because the size of the audience was so small compared to the general viewership.” 

Yet, those three organizations—WGBH, NCI, and VITAC—still kept busy. And in the early 1990s, there was an increase in captioning agencies. While the primary cause for that increase was the passage of the Television Decoder Circuitry Act of 1990, another cause was the passage of the Telecommunications Act of 1996. That law, among other things, established new rules mandating closed captions for broadcast TV. 

“Suddenly in 1991, I can remember doing surveys of broadcasters and asking them, ‘who's doing your captioning?’ and I'd hear names I'd never heard of before,” Freed says.  

He explains that the impetus for the passage of closed captioning laws and rules was advocacy from the deaf and hard-of-hearing community. Viewers and numerous deaf and hard-of-hearing advocacy groups complained, and legislators eventually listened. 

“Captioning agencies, led in large part by The Caption Center at WGBH, fought very hard to get laws passed that mandated captions on TV, too,” he says. “As you might expect, most broadcasters fought back, citing high costs and the small audience as the main reasons they did not want to be forced to provide captions.” 

However, as captioning agencies increased, prices decreased. Freed says there was so much price competition that some captioning agencies shut down because they simply couldn’t afford to stay in business. 

“You often get what you pay for,” Freed says. “But these days it's typical for people to charge depending on the level of caption production that you choose. As a producer or program provider, you might pay a dollar a minute. In the early to mid-1980s, you would pay, more or less depending on who was doing the work, $2,500 per hour.” 

He adds that producers and broadcasters saw those lower prices and realized that “captions actually broadened” their audience to anyone who owned a TV. 

“Captions were also being shown to be useful to people who were not deaf or hard of hearing,” he says. “They were useful for teaching kids and adults how to read. They were also useful for teaching foreign languages to adults as well.” 

The surge of new captioning agencies in the early 1990s meant that captioning technology also advanced. Today, there are a number of “do it yourself” captioning tools that are mostly for the creation of online captions. Anybody who produces online videos can create their own captions with these tools, some of which are free (including the one NCAM makes).  

“Making these tools available makes it easy or sort of removes one more excuse for not providing captions for videos that are only distributed online,” Freed says. 

 

21st Century Communications & Video Accessibility Act (CVAA)


Under the CVAA, when NBC puts "The Office" online, it must include closed captions, because the original program aired with closed captions.  

Broadcasters no longer solely distribute their content to television screens. They can now put their content in front of virtually anyone, anywhere via the internet. In October 2010, President Obama signed the Twenty-First Century Communications and Video Accessibility Act (CVAA) into law to ensure that twenty-first century technologies are accessible to everyone. 

“What the CVAA basically did was make a number of rulings about captions and other accessibility matters,” Freed explains. 

One thing the law stipulates is that if a broadcaster airs a program with captions on TV, and then wants to put it on the internet, the web version has to have captions as well. 

“If you take a captioned program from broadcast and you put it on the internet in whatever form, such as moving it to a YouTube channel or embedding it on your own page, those captions have to travel with the program,” Freed says. 

One exception to that rule is that a program or video created and distributed solely for the web doesn’t need to be captioned. 

“There are no regulations mandating captions for videos that are created solely for the web,” Freed notes. However, he thinks that captions for web-only videos will eventually become mandatory. 

Another thing the law states is that certain types of devices, such as Roku boxes and smart TVs with internet capabilities, have to be accessible to people who can’t see the screen. 

“You can buy Roku boxes these days and similar devices that will speak to you if you can't see the screen,” Freed says. 

The law also states that on-screen program guides must be accessible to those who can’t see. 

“If you turn on the capability, you'll be able to listen to the menus and listen to all of the programming grids for Netflix or Amazon or whatever and use them with your remote control, even if you can't see the screen,” Freed says. 

The law has made an “enormous difference” to people who can’t see who have decided to cut the cord and go with online distributed media. 

“Up until this rule was passed, anybody who was blind or visually impaired and wanted to watch TV alone or wanted to watch a program alone via a smart app or a Roku box was out of luck because there was no way to operate it non-visually.” 


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real-time. 

Q&A with Jim Bernier, Turner Broadcasting Engineering Veteran

September 26 2019 by Tina Nazerian

SnapStream Series: The Future of Broadcast Monitoring & Compliance 


 

Key Takeaways

Jim Bernier's long career in transmission engineering spanned pivotal moments in media technology, such as: 

1) the transition from 24-hour logging VHS machines to digital recording 

2) the move from DTMF tones to SCTE messages for program insertion cues

3) the deployment of SMPTE 2110 workflows

 


When Jim Bernier started his career in the early 1980s, he used regular videotapes for spot airchecks. 

Throughout his 40-year career, he saw the broadcast monitoring and compliance space undergo many changes. He was most recently at Turner Broadcasting, where he was the Senior Director of Maintenance and Transmissions Engineering, Technology and Engineering, US NetOps. 

Bernier retired in August after 18 years at Turner. SnapStream interviewed him to learn more about the four decades he spent in broadcast monitoring and compliance, and how he saw the field evolve. You can read an excerpt from the conversation below, which has been edited and condensed for clarity. 

 

SnapStream: What made you want to have a career in the broadcast monitoring and compliance space? Why did you pick that particular career path?

Bernier: I started out working for cable television when I was in high school, doing remote sports production. I decided at that point that I wanted to work in television and went to college for that purpose. Then shortly after I graduated from college, I landed a job at my hometown local TV station, WWNY-TV, as an engineer technician doing production work during the week and light maintenance engineering on the weekends. 

About four years into that, the chief engineer of the station was retiring. They were looking to change some of the processes around the engineering staff, and they asked me to take over as director of engineering for the station. From that point forward, I was always in the management of engineering. 

 

What did your day-to-day responsibilities entail at Turner?

They were all over the place. My team was responsible for supporting all of the on-air systems associated with the Turner Entertainment Networks and supporting systems. That happened to include our monitoring and compliance systems as well. So they were responsible for addressing any technical problems that arose, as well as the troubleshooting, installing, and replacing of equipment. My responsibilities for that were to manage that staff. I was also involved with strategic planning and look-aheads as far as technology and what's coming down the pike.

 

You were in the broadcast monitoring and compliance world for four decades. In what ways did you see the space change in your four decades in it?

Probably the most significant change came when we transitioned—we used to do this using long-record, 24-hour logging VHS machines, which were basically slow-scan machines that would compress 24 hours onto what were old VHS eight-hour tapes, which had horrible video quality and horrible, if any, audio quality.

It was basically used more for proving that commercials ran than anything else. It was not terribly useful for much more than that. When you made the jump to digital recording, and being able to encode and record on hard drives, now you had a much more robust recording that enabled my team to start using it for some troubleshooting in addition to simply validating whether or not the correct commercials aired and measuring the dropout time of any kind of on-air fault.

We could go back and reconstruct what actually occurred on air and use it to help in our diagnosis of either equipment or operational failures that caused what we call on-air disruptions, or OADs. As that technology blossomed, going from simple MPEG-2 recordings into H.264 recordings, we were able to recapture more of the actual stream itself, rather than just decoded video and audio. That became another element of compliance monitoring that was extremely important, especially as the FCC placed more requirements to provide statements of compliance for closed captioning and descriptive audio services.

On top of that we also started moving away from DTMF tones. In the cable universe we went from using DTMF tones in a secondary audio channel to trigger cable headends' local insertion opportunities to using SCTE-35 messages in the transport stream. We are able to decode those messages as well and verify that they went out, as they are part of the transport stream. 

This is important in terms of supporting our advertisers, as well as the distribution arm of Turner. They validate and confirm to our distribution partners, cable headends, and MVPDs that we did indeed include those SCTE messages for their local insertions, as well as the same messages being used as SCTE-104 messages in the streaming environment. 

All of that could not have been captured in the slow-scan VHS world that we started with. Even prior to those, we were simply recording air-checks on regular videotape machines, which was horribly inefficient in terms of both storage space and price.

 

Recording air checks on regular videotape machines—that was when you first started in the early 80’s?

Early 80's would have been using regular videotape for spot air-checks. So if you had a high-profile show that was recording, you might record an air-check either on a one inch tape, possibly even a two inch quad tape, or three quarter inch tape. Sometimes you might have used a full-speed VHS tape. 

I want to say it was in the 90's probably that slow scan VHS came in. They actually spin them down so you could record. Their birth was in the security realm and then we just found them to be useful in terms of doing air-checks as well.

 

Before your retirement, you did one of the first SMPTE 2110 workflows for Turner. Could you give an overview of that experience, as well as your thoughts on the transition from SDI to SMPTE 2110? 

I go all the way back to analog. I saw the transition from analog video and audio into SDI. Then, the jump in bandwidth to include high-definition SDI, which was the mainstay up until very recently. 

SMPTE 2110 is the packetized version of the video and audio signal. We’ve migrated our plant to that standard. Everything that runs through our master control today is ST-2110 as a video source. 

We understand fundamentally both video and audio start as analog signals. It's how sight and hearing works in nature. Then, we've managed to take technology and digitize it, compress it, and find algorithms that allow us to move far more information in a smaller bandwidth to get to an end product where we then turn it back into an analog display. Because what people hear is analog audio and what you see is analog light video.

The use of ST-2110 made perfect sense, once we were able to get the switching speeds on routers to be able to handle it. It used to be that when you were routing or handling a video signal or an audio signal, even in the SDI world, the signal moved in one direction. You plugged it into an output of one device and an input into another device and the signal moved from, let's say, from left to right. 

ST-2110 is a data network essentially. It's actually bi-directional. So you take your cable, which could be a simple CAT-6 cable, and you plug it into a port on one device and plug it into the other device, and the communication between the two is bi-directional. And when you're dealing with some of the broadcast pieces, especially in terms of production, sometimes it gets a little tough getting your head wrapped around the fact that a device has both the input and output and it's running on one cable. It's a bit of a nuance that I've seen some people, counterparts, having a tough time grasping.

Once you're on a network, the signal itself is a stream, and is able to be joined by any device that's on that same network. So again, it's kind of like IP machines—if you know the IP address of a particular machine on the network, you can possibly look at its directory, if it's given you permission to do that, and you can pull files off of it or put files to it. Well, it's the same way with video and audio now. As long as you know what the IP address is and you have permission to access it, you can actually watch the video.

Very much like data networks, you don't necessarily have to limit yourself to the IP address itself. You also have what are in effect domain name servers that'll reconcile names to IP addresses themselves. So you're able to use names and name references rather than absolute IP addresses.

 

What do you think the future of broadcast monitoring and compliance holds?

Well, I think the future there is huge. What I'm seeing in terms of demand from our network management people—when I say network management I'm not talking about the data networking people, I'm talking about the TNT network, the TBS network—they want to know how their signals are being received by the consumer at the far end. 

That's always been kind of a trick, because once we send it out, it's out there and there are any number of elements that can adversely affect the consumers’ reception of that signal. As we move forward with more direct-to-consumer distribution platforms, the ability to get feedback on quality of service becomes more important, and tying that back into the actual timeline of our signals becomes extremely important to our management team and sales team.

An ability to gather a lot of different data metrics and performance metrics, and collect them all into a codified system that can then present it in a logical and sophisticated manner, becomes the key to the monitoring and compliance elements. 

The basic compliance stuff is already being done. They've tackled most of that in recent years. It's going to be the far end QoS experiences that we need to capture, then correlate it back to the playout timeline (as-run logs).


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real-time. 

How TV & Cable Networks Should Gear Up for a Future of Addressable TV Ads (Part 3 of 3)

September 24 2019 by Tina Nazerian

SnapStream Series: The Future of Broadcast Monitoring & Compliance 

This is the third blog post of a three-part series on a future of addressable TV ads 


 

Key Takeaways

As broadcast TV advertising becomes addressable, TV and cable networks will have new opportunities to generate revenue. To prepare for this future, ad sellers at TV and cable networks should: 

1) think about delivery mechanisms for commercials, including leveraging ACR/smart TVs 

2) consider making different versions of an ad for brands 

3) create test opportunities for ad buyers 

 


 

Earlier in our series, we looked into how Multichannel Video Programming Distributors (MVPDs) and local TV stations should prepare for a future of addressable TV advertisements. TV and cable networks have to get ready for that future too. 


James Shears, the Vice President of Advanced Advertising at Extreme Reach, a creative asset management platform that helps ads get to the screens they need to be on, has advice on how TV and cable networks can position themselves to take advantage of this future.


Think About Delivery Mechanisms for Commercials

Shears says that TV and cable networks need to think about their delivery mechanisms for commercials. They can run addressable advertisements through the MVPDs and set-top boxes, and they can also use OTT and smart TVs, which rely a lot on Automatic Content Recognition (ACR) data. 

Shears says that MVPDs store every ad on the set-top box, which means the ads are pre-cached for play-out. On the other hand, because OTT and ACR/smart TVs leverage IP to deliver ads, the ads get served in a more real-time manner. 

TV and cable networks, he adds, already rely on MVPDs to deliver their content in the linear world. 

“MVPDs have a lot of power, so they would control the technology and aren’t as incentivized as smaller players to build customized tech,” he says. “The MVPDs also already have the economics worked out within the space, and can drive the business discussions. It’s wrapped up within the carriage agreements of the content on linear TV.” 

He explains that by going outside the MVPD route, TV and cable networks may earn more power and could push for more customized technology. And because the economics and business models of the OTT and ACR/smart TV worlds aren’t completely defined yet, “there’s room at the table for more discussion.”  

However, Shears recommends that TV and cable networks use all of those options. 

“To have a successful addressable campaign, you need scale, and the broadcaster has to decide the best way to do that," Shears says. “In the current environment, it’s probably prudent to explore all avenues available. From there, the market may help decide on the tech that works best and can scale quickly.”

  

Segment Inventory at the Appropriate Level, and Consider "Creative Versioning" 

Shears estimates that TV and cable networks in the United States have between 13 and 18 minutes of commercial time per hour—which amounts to between $66 billion and $70 billion in ad revenue per year. 

Ad sellers at TV and cable networks need to make sure that they’re segmenting their inventory at an appropriate level, given that addressable means “essentially splitting the units” they would typically run.

“Instead of showing, say, JCPenney to the entire country, you might show a portion of the country JCPenney and a portion of the country a Ford ad or a Coca-Cola ad,” Shears says. 

He also stresses that it wouldn’t make sense for them to make every single unit addressable “this early on,” though he expects that will happen eventually. He recommends that ad sellers at TV and cable networks think through their business models, and consider whether or not it would make sense to do what he calls “creative versioning.” 

“You sell one spot to Ford, as an example, and so the entire country will see a Ford ad,” he says. “But maybe one viewer sees a sports car ad and another sees a minivan ad. You're still selling the entire spot, but you're creating different versions for brands.” 

It’s also important for those ad sellers to identify inventory. Linear TV has a finite amount of inventory—for example, 15 minutes of non-programming time per hour. The broadcaster sets the market price for the commercials that will fill those 15 minutes. 

“Addressable in the linear TV space requires a slot to show the commercial to the viewer,” Shears says. “That means, it too has to take part of that 15 minutes. If that is true, it really becomes an economics exercise. If the broadcaster now can’t sell the same amount of inventory as before, how does it manage the yield?” 

Shears suggests that the broadcaster look at the lowest yield it currently has on its books and replace it with addressable. 

“That’s really a decision for the broadcaster to ensure it can still maintain the same or hopefully more incremental revenue,” he says.

 

Get Buy-In from Advertisers 

Of course, Shears notes that addressable advertising will only be successful if ad sellers at TV and cable networks get buy-in from advertisers. To get that buy-in, they should create test opportunities for ad buyers. 

“It would depend on the brand, because HGTV has different advertisers than ESPN or Lifetime," he says. "Ad sellers should identify an opportunity within their endemic advertisers. So if it's HGTV, maybe the advertisers are Home Depot or Lumber Liquidators. They could then create opportunities based around those types of genres.” 

Shears says that addressable ads allow advertisers to measure the effectiveness of their campaigns. Discussions between ad sellers and ad buyers need to happen before campaigns start so everyone understands the success metric. Once that is understood, ad sellers can design campaigns appropriately. 

“This is not a set it and forget it type of product.”

 

Update November 27, 2019: A new report from Rethink TV found that addressable TV advertising will grow rapidly in the coming years, increasing from $15.6 billion in total worldwide revenue in 2019 to $85.5 billion by 2025. The report also found that this growth in addressable TV advertising will happen in "all sectors of pay TV and all major geographies" over the next six years. 


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real-time. 

How Local TV Stations Should Prepare for a Future of Addressable TV Ads (Part 2 of 3)

September 19 2019 by Tina Nazerian

SnapStream Series: The Future of Broadcast Monitoring & Compliance 

This is the second blog post of a three-part series on a future of addressable TV ads 


 

Key Takeaways

Addressable advertising can help local TV stations grow their revenues. To prepare for a future of addressable broadcast TV advertising, ad sellers at local TV stations should:

1) focus on their advertising sweet spots—for example, the automotive industry 

2) find different ways to get viewers' geographic location and geolocation data

3) segment viewers in a specific way, such as who is in the market for a pickup truck

 


 

As broadcast TV advertising is becoming addressable, it’s not just Multichannel Video Programming Distributors (MVPDs) such as Verizon, Comcast, and DirecTV who have to gear up for that future. 


Local TV stations have to prepare as well—and James Shears, the Vice President of Advanced Advertising at Extreme Reach, a creative asset management platform that helps ads get to the screens they need to be on, has tips on how they can do so.


Think About Geographic Location, Geolocation, and Advertising Sweet Spots

Shears says that for local TV stations, addressable advertising is geared toward both geographic location and geolocation. 

“It’s really about understanding where the consumer is,” he says. “Can you target based on zip code? Can you target based on location derived from a cell phone? The answer is yes.” 

Local TV stations also need to consider what their advertising strong suits are. 

“Your sweet spots are probably automotive, sometimes real estate and finance, and sometimes quick service restaurants—all things that are really bundled up with geolocation,” Shears says. “The first thing that you should think about is the automotive industry.” 

For example, Shears says that ad buyers at tier-two auto dealerships (groups of regional dealerships that pool their ad budgets) stipulate a simple targeting approach, such as wanting to reach viewers within a particular zip code. 

However, ad sellers at local TV stations should try to segment their audience in a more specific way. For instance, they could identify viewers who are in the market for a pickup truck. 

 

Seek Different Ways to Get Viewers' Geographic Location and Geolocation

Shears says that local TV stations have several options when it comes to getting their viewers’ geographic location and geolocation data. 

Sometimes, they can use authenticated opportunities to gather first-party data. Maybe they have an app that people need to sign in to use, or maybe they can run an online sweepstakes that viewers enter by giving their email addresses or their names and physical addresses. 

Local TV stations can partner with data companies to get that data too. 

“The stations should focus on those companies that offer insight into first, the home where the TV actually is,” Shears says. “And look at some forms of device graphs to measure effectiveness. Did the person that saw the ad go to a car showroom, as an example?”

Additionally, Shears notes that the ATSC 3.0 broadcast standard will create data opportunities for local TV stations. 

“It will provide different data points,” Shears says. “Now, that's probably a year or two away, but what it does is it allows you to leverage IP addresses, and from IP addresses you can kind of back in to your audience. Obviously, it would be anonymized, but you can figure out their location and census-level information, typically about who would be in that household, et cetera. That will help you build out your data profile.”

 

Get Specific Data About Viewers

How can local broadcasters get very specific information about viewers, like who is in the market for a pickup truck? 

“Some of it is behavioral,” Shears explains. “You can kind of figure out what shows they’re watching. If you're a local broadcaster, you're probably hyper-focused on news. You can get some insights in terms of what stories are really resonating with people—are there things throughout the daytime block that they're really focused on?” 

From there, he says a local broadcaster can partner with a data company that does “look-alike modeling,” which is also a common technique in online advertising. That means the data company will find audiences that look like one another. It would look at the characteristics of a specific segment, and then go find audiences that are similar. 

To trace that information back to particular viewers, local broadcasters can partner further with MVPDs, which have the viewership data that happens on their set-top boxes. From there, the MVPDs could pinpoint viewers in an anonymous, privacy-compliant way. 

Shears also points to Automatic Content Recognition (ACR) technology, which is quickly going mainstream. Smart TVs, he says, rely a lot on ACR data. Since they’re connected to the internet, they operate from an IP address. 

“From the IP address, data safe havens can be used to pull attributes from that specific household,” he explains. “Really, the companies can apply census-level information. On top of that, smart TVs and ACR technology can capture genres of shows, engagement scores, and so forth. Because of that, these data sets could be quite comprehensive, covering both behavioral and demographic.” 

Local broadcasters can also turn to Nielsen, which packages up “vast amounts of data,” including census-level information on who is watching particular programming. Nielsen also has data on things such as “overnight ratings for quick insights.” 

Ultimately, Shears believes that there are so many data sets and vendors available that local broadcasters should not focus on one option.


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real-time. 

Closed Captioning on TV in the United States: 101

September 11 2019 by Tina Nazerian


                                                                                                                           

Key Takeaways

In the United States, viewers can decide to turn on closed captions while watching TV. Closed captions: 

1) can appear in two different forms

2) are created differently depending on the type of programming 

3) fall under the FCC in the United States 

 


 

Have you ever seen text on your TV while watching the news or your favorite show? What you saw were probably closed captions. 

In the United States, closed captions refer to the transcription of a program’s audio that a viewer can choose to turn on.

Closed Captions vs. Subtitles in the United States 

In the United States, closed captions and subtitles look similar, but a major difference between them is their purpose. Whereas closed captions are typically used by viewers who are deaf or hard of hearing, subtitles are typically used by viewers who don't understand a video's original language, and need it translated via on-screen text. In some parts of the world, the term "subtitles" is used to refer to both use cases. 

 

 

Types of Closed Captioning on TV

An example of pop-on captions. 

 

 

An example of roll-up captions. 

Closed captions can appear in two different forms. Live broadcasts will typically have roll-up captions, while pre-recorded broadcasts will typically have pop-on captions. 

When the second line in a roll-up caption format begins, the first line shifts up to make space for that second line. The next text always appears in the same location, while the older text always moves up. With pop-on captions, however, entire blocks of text show up all at once. 
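
The two behaviors can be sketched in a few lines of code: a roll-up window scrolls older lines out of a fixed-size buffer, while a pop-on window replaces its whole block at once. The classes below are purely illustrative, not a caption decoder.

```python
# A minimal sketch of roll-up vs. pop-on display behavior (illustration only).
from collections import deque

class RollUpWindow:
    def __init__(self, rows: int = 3):
        self.lines = deque(maxlen=rows)  # oldest line falls off the top

    def add_line(self, text: str) -> list:
        self.lines.append(text)          # new text always appears at the bottom
        return list(self.lines)

class PopOnWindow:
    def __init__(self):
        self.block = []

    def show_block(self, lines: list) -> list:
        self.block = list(lines)         # the entire block appears at once
        return self.block
```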

Additionally, there are two standards of closed captions for broadcast television. EIA-608 captions (also known as CEA-608 captions and Line 21 captions) “were the old standard for closed captioning of analog television,” writes Emily Griffin for 3Play Media, whereas EIA-708 captions (also known as CEA-708 captions) are “the new standard for closed captioning of digital television.” 708 captions are usually what you’ll see in over-the-air broadcasts today. 

608 captions allow for 2 bytes of data per frame of video, often called “byte pairs.” Sometimes those bytes are letters. With 608 captions, caption writers have customization options, including the ability to change the text’s foreground and background color. 708 captions, however, have ten times the bitrate of 608 captions. That makes 708 captions more customizable than 608 captions. For example, 708 captions support eight different fonts, as well as many more foreground and background colors and opacity values. 

Feature                   708                   608
Background Colors         64                    8
Foreground Colors         64                    8
Edge Colors               64                    0
Font Choice               Yes (8 fonts)         No (whatever TV renders)
Font Can Be Underlined    Yes                   Yes
Font Can Be Italicized    Yes                   Yes
Font Size Option          Yes (3 font sizes)    No (just 1 font size)
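
To make the "2 bytes of data per frame" idea concrete, here is a minimal sketch of handling a CEA-608 byte pair: checking the odd-parity bit, stripping it, and mapping basic characters to ASCII. It ignores control codes, special characters, and caption channels, so it illustrates the byte-pair structure rather than being a real 608 decoder.

```python
# A minimal sketch of decoding a CEA-608 byte pair: each byte carries seven
# data bits plus an odd-parity bit, and basic characters largely follow ASCII
# (a handful of exceptions, such as accented characters, are omitted here).
def odd_parity_ok(byte: int) -> bool:
    """True if the byte has an odd number of set bits (odd parity)."""
    return bin(byte).count("1") % 2 == 1

def decode_basic_pair(b1: int, b2: int) -> str:
    """Decode a byte pair of basic printable characters, ignoring control codes."""
    chars = []
    for b in (b1, b2):
        if not odd_parity_ok(b):
            continue                     # drop bytes that fail parity
        data = b & 0x7F                  # strip the parity bit
        if 0x20 <= data <= 0x7E:         # printable range (mostly ASCII)
            chars.append(chr(data))
    return "".join(chars)

# Example: 0xC8 is 'H' (0x48) with its parity bit set, 0xE9 is 'i' (0x69).
# print(decode_basic_pair(0xC8, 0xE9))   # -> "Hi"
```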

 

How TV Captions are Made

According to the Media Access Group at WGBH, an organization that has led the way for captioning and described media, captions are made differently depending on the type of programming. 

The Media Access Group at WGBH explains that for pre-produced programs like drama series, trained caption writers, “using special captioning software, transcribe the audio portion of a program into a computer, inserting codes that determine when and where each caption will appear on the TV screen.” 

After the captions have been properly “timed and placed,” the data is “then recorded, or encoded, onto a copy of the master videotape.” Afterwards, a “decoder attached to or built into a television receiver can render the captions visible.” Because these captions are created in advance, they can come close to being completely accurate. 

The process is different for live programs. As the Media Access Group at WGBH notes, “captions created for live broadcast are not timed or positioned and rarely convey information other than the spoken dialogue. The data is encoded into the broadcast signal continuously as the program airs.” 

Live captions typically have a time delay of 5 to 10 seconds. The delay isn’t constant, and can vary even within a particular program.

There are four different ways of captioning live programming: 

  1. stenographic captioning
  2. manual live display 
  3. electronic newsroom
  4. hybrid system

 


The color choices seen in Adobe Premiere Pro while creating 608 captions. 

 


The color choices seen in Adobe Premiere Pro while making 708 captions. 

Legal Requirements for Broadcasters 

Under the FCC’s rules, in the United States, both distributors (TV stations as well as cable and satellite providers) and program producers are responsible for closed captioning compliance, explains communications lawyer Scott Flick. 

“For this reason, most distributors expect their program producers to provide them with a certification that the producer has followed the FCC’s best practices for captioning, which protects the distributor from fines if the captioning is deficient—unless the distributor knew that the producer’s certification was false,” Flick adds. 

The FCC states that closed captions on TV should be accurate, synchronous, complete, and properly placed. The FCC explains that it understands that there are “greater hurdles involved with captioning live and near-live programming,” and as such, distinguishes between pre-recorded, live, and near-live programming in its rules. 

However, the FCC offers some self-implementing exemptions from the closed captioning rules. For example, one self-implementing exemption is for instructional programming that is “locally produced by public television stations for use in grades K-12 and post secondary schools.” The FCC also has “economically burdensome” exemptions.

Viewers can directly report closed captioning issues to their video programming distributor, or file their complaints with the FCC, who will then send it to the video programming distributor. 

Once the video programming distributor has gotten the complaint, it must respond within 30 days. If a video programming distributor wasn’t compliant, or can’t prove that it was compliant, it could face fines. 

Flick notes that the FCC doesn’t have a “base fine” for captioning violations—it deems each episode of a program with defective captions to be a separate violation. 

“As a result, even a modest ‘per episode’ fine can add up quickly once multiplied by the number of programs that were not properly captioned.” 


With Moco: Compliance Monitoring by SnapStream, you can monitor your feeds for regulatory compliance and advertising proof-of-performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more.

With SnapStream's flagship product, you can search TV. You can also easily create clips that power audience engagement, boost your brand's influence, and drive monetization. Shape the narrative in three quick steps—find memorable moments from broadcast TV and your own video streams, transform them, and share them to Facebook, Twitter, and more. 

How MVPDs Should Prepare for a Future of Addressable TV Ads (Part 1 of 3)

September 10 2019 by Tina Nazerian

SnapStream Series: The Future of Broadcast Monitoring & Compliance 

This is the first blog post of a three-part series on a future of addressable TV ads 


 

Key Takeaways

Broadcast TV advertising is becoming addressable, meaning new opportunities for MVPDs to grow their revenues. To prepare for this future, ad sellers at MVPDs should: 

1) assemble data from their in-house databases and also partner with third-party data companies  

2) consider their viewers' geolocations 

3) collaborate with advertisers to hone in on the audiences they want to reach

 


 

Broadcast TV advertising is becoming addressable, meaning advertisers can better segment and target prospective customers. 

James Shears is the Vice President of Advanced Advertising at Extreme Reach, a creative asset management platform that helps ads get to the screens they need to be on. He was previously at Dish Network, where he started the world’s first impression-by-impression platform for linear addressable TV. 

“TV typically has a finite amount of inventory,” Shears explains. But addressable environments “create additional opportunities for revenue.” 

According to Shears, TV is moving towards an IP-delivered future. And when everything becomes IP-delivered, whether or not viewers are watching programming on set-top boxes, content owners and advertisers will have to think about addressable advertisements, and about personalizing ads as well. 

He explained that most of the addressable ads viewers see today are still the same ads they would see if they were watching linear TV. 

“You’re delivered a product that you’re probably in the market for, but it doesn’t mean that the ad is catered to you specifically.” 

Advertisers need to ask themselves how they can make their ads more dynamic and personalized to drive engagement. They need to create the appropriate experience for prospective customers, and get an appropriate ad in front of them. 

“If you’re running addressable and you’re running the same copies that you would typically, it’s really going to fall flat,” Shears says.  

Of course, it’s not just advertisers who have to prepare for a future of addressable advertising on TV—broadcasters obviously have to as well. Here are Shears’ tips on how ad sellers at Multichannel Video Programming Distributors (MVPDs) such as Verizon, Comcast, and DirecTV, can navigate addressable advertising. 


Dive into the Data

One benefit MVPDs have? Because they’re sending bills, they have a good amount of information on their subscriber base, including names and addresses. 

“Addressable is really run by the data,” Shears says. 

He says MVPDs need to spend time sifting through what that data actually means. By partnering with a data company (such as Experian, LiveRamp, Neustar, Epsilon, or Acxiom), an MVPD would get a higher-level view of who exactly makes up its customer base. 

“In most instances, these data companies provide two functions,” he explains. “First, they are database management companies, so they house CRM lists for brands. Second, they act as a safe haven for matching purposes. In that scenario, the MVPD would pass their subscriber file to the safe haven, who would then match that list with other census-level information to create a truer picture of who lives in the household.”

MVPDs should create census-level information around households for off-the-shelf segments, such as age, gender, presence of children, household income, and education level. They could then determine if those segments are appropriate for advertisers. 

Shears also thinks that MVPDs should get more customized with their data offerings—they “should be working with their advertisers to help them come up with interesting segmentation around the brands’ first party data” such as “heavy users of a particular product.” 

MVPDs might also have opportunities to leverage viewership data. For example, perhaps a particular segment of an audience consistently watches a certain program or genre. 

 

Consider Viewers’ Geolocation

A crucial piece of data MVPDs have on their subscribers is their geolocation. Because they know the physical home addresses of their consumers, they can create targeting segments around those locations. 

For example, maybe a subscriber lives within a two-mile radius of a McDonald’s and a Burger King. 

“That probably means they're more apt to be responsive” to ads for both of those fast food chains. 
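As a rough illustration of that kind of radius targeting, here is a short Python sketch that uses the haversine formula to check whether a home falls within two miles of a store. The coordinates are made-up assumptions; a real workflow would geocode subscriber and store addresses first.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Hypothetical coordinates for a subscriber's home and a nearby restaurant.
home = (29.7604, -95.3698)
store = (29.7499, -95.3584)

if haversine_miles(*home, *store) <= 2.0:
    print("Subscriber is inside the two-mile radius for this location")
```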

Additionally, since MVPDs have their viewers’ home addresses, they can “back into” the IP addresses of those homes. 

“Once they have that, they could essentially create a device graph of all the devices in the home that are pinging the home IP address,” Shears notes. “That information is valuable both from a targeting perspective and also a measurement/attribution perspective.”
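In practice, a device graph in this sense is simply a mapping from a household's IP address to the devices observed behind it. The toy Python sketch below (with made-up IPs and device IDs) shows the shape of that data; a real graph would be built from ad-request or network logs.

```python
from collections import defaultdict

# Hypothetical log of (home IP, device) observations.
observations = [
    ("203.0.113.7", "smart-tv-01"),
    ("203.0.113.7", "phone-ab12"),
    ("203.0.113.7", "tablet-9f3c"),
    ("198.51.100.4", "smart-tv-02"),
]

# Build the device graph: home IP -> set of devices seen behind it.
device_graph = defaultdict(set)
for home_ip, device_id in observations:
    device_graph[home_ip].add(device_id)

print(dict(device_graph))
```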

 

Dig Into The Exact Audience Advertisers Want To Reach

Ad sellers at MVPDs can brainstorm what some of the most popular targeting segments are (for example, maybe it’s those in the market for a car, or those with children), but Shears stresses the importance of having conversations with their advertisers to determine what it is that they’re looking for. 

If a brand is an auto manufacturer, for example, the client will probably want to reach consumers in the market for a car. If a brand is a restaurant, it would probably be more interested in geolocation targeting. 

“You need to think about ways that you can address people in the appropriate way, so you can target at the appropriate level,” Shears says. 

The first question MVPDs should ask advertisers is what metric they’re trying to measure.

Then, Shears says, MVPDs need to learn about advertisers’ consumers. They should ask advertisers questions that are “more defined than the traditional age/gender type of targeting that is done in linear television today.” Two examples of questions MVPDs could ask advertisers are “What does a client of the brand typically look like?” and “Who are the typical heavy users of the product?” 

Once ad sellers at MVPDs find out what an advertiser wants, they can help the advertiser measure and shape its KPIs. Then, they can drive responses for the advertiser.

Ultimately, Shears explains, MVPDs have to make addressable “super easy” for advertisers to try out. He says that a first-time advertiser is probably seeking to target more off-the-shelf segments, such as household income, presence of children, and education level, with the goal of generating brand awareness and increasing sales. 

“To really benefit from addressable, though, they’ll want to come back and customize segments rather than use off-the-shelf segments.” 


With SnapStream's broadcast monitoring and compliance product, you will be able to monitor your feeds for regulatory compliance and advertising proof of performance. SnapStream includes as-run log integration, loudness compliance, and more. You can also use SnapStream to search, clip, and share live and recorded TV. 

How 3 Broadcast Industry Professionals Would Evaluate Their Next Broadcast Monitoring and Compliance Solution

August 29 2019 by Tina Nazerian

newshutterstock_146581445    sirtravelalot/Shutterstock

A great broadcast monitoring and compliance product is the difference between being prepared with evidence and scrambling when a customer files a loudness complaint with the FCC, an advertising client alleges that an ad didn’t run properly, a viewer claims that captions didn’t run properly, and more. 

It’s crucial to properly evaluate any broadcast monitoring and compliance system prior to making a purchasing decision. SnapStream asked a few anonymous industry professionals—a studio broadcast engineer at a local television station group, a maintenance engineering director at a cable company, and a field operations director at a Multichannel Video Programming Distributor (MVPD)—how they would evaluate their next broadcast monitoring and compliance solution. Here’s a selection of the questions they told us they would ask. 

 

Maintenance Engineering Director, Cable Company 

Question they’d ask: “Which current mandated compliance monitoring features does the product support?”

Our take: A baseline piece of information to determine is which legally mandated monitoring features the product will log and monitor in your country. 

For example, in the United States, all video programming distributors need to close caption their TV programs. They also have to follow the CALM Act, which states that the audio of TV commercials can’t be more than a certain amount louder than the TV program they’re accompanying. 

Any broadcast monitoring and compliance product that you’re considering should equip you to meet your country’s regulatory requirements. 

Question they’d ask: “Which business-required monitoring features does the product support?” 

Our take: In addition to regulatory compliance, broadcasters have to deal with business-required monitoring. Take Nielsen Audio Watermarks—they are an integral part of Nielsen ratings, which give broadcasters important information about audience size and composition. If your organization embeds Nielsen Audio Watermarks, for instance, make sure that the system has a detection method in place to alert you when the watermarks aren’t present. 

Additionally, you might need to use your broadcast monitoring and compliance system for advertising proof of performance. Showing your advertisers exactly how their ad ran—and that the beginning and end of the ad didn’t clip—can help you sell more ads. 

Question they’d ask: “Is the product capable of meeting future needs/functionality in a SMPTE 2110, Dolby Atmos, 4K, and streaming (D2C) world?”

Our take: As the broadcast monitoring and compliance space goes through technological changes, it’s important that any product you’re evaluating has a team behind it that’s constantly staying on top of new developments and iterating. In the broadcast monitoring and compliance world, this means having a path to SMPTE 2110 support (SDI over IP workflows with uncompressed and compressed video), Dolby's latest standard for audio (Dolby Atmos), 4K video, and IP ingest and monitoring for OTT channels. 

If standards and business needs change, you want your system to change too.

 

Studio Broadcast Engineer, Local Television Station Group

Question they’d ask: “What kind of configurability does the product have, and does it support your custom workflows?”

Our take: Maybe in addition to recording everything, you want to record at a lower bitrate to save storage, or create and share clips of your recordings. Or integrate with your cloud storage provider, like Amazon S3, or your online video platform (OVP), like Brightcove or Ooyala.

A system that has the right configuration options and custom workflows can help you save time.
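For instance, if your workflow pushes exported clips to Amazon S3, a minimal integration could look like the Python sketch below (using boto3). The bucket name and clip path are placeholder assumptions, not anything specific to a particular product.

```python
import boto3

# Placeholder bucket and clip path -- assumptions for illustration only.
BUCKET = "my-compliance-clips"
CLIP_PATH = "clips/evening-news-2019-08-29.mp4"

s3 = boto3.client("s3")

# Upload the exported clip so it can be shared or archived in the cloud.
s3.upload_file(CLIP_PATH, BUCKET, CLIP_PATH)
print(f"Uploaded {CLIP_PATH} to s3://{BUCKET}/{CLIP_PATH}")
```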

Question they’d ask: “How long is this system going to go before I need to reboot it?”

Our take: Asking about a system’s long-term stability is important. Whether you’re tracking your feeds for loudness compliance, ad verification, air checks, or something else, it’s crucial that the system you’re using has high availability—ideally, it can run continuously for weeks and months.

If the system requires frequent reboots, your logs will be incomplete, not to mention the extra hassle involved in administration and maintenance.

 

Field Operations Director, Multichannel Video Programming Distributor (MVPD)

Question they’d ask: “How does this product allow me to be as proactive as possible to alert me of any issues, rather than being reactive?”

Our take: If there is an interruption on your feeds, you want to get a notification as soon as possible. The moment you get that alert, you can start fixing the problem. 

A system that doesn’t give you alerts means you might not find out when you get black video frames, pixelation, or loss of closed captions or Nielsen Audio Watermarks. 
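To make the proactive-versus-reactive distinction concrete, here is a hedged Python sketch of a polling loop that emails an alert the moment a check fails. The probe functions, feed names, and addresses are stand-in assumptions for whatever signal-quality checks your monitoring system actually exposes.

```python
import smtplib
import time
from email.message import EmailMessage

# Hypothetical probes -- replace the "..." bodies with real black-frame,
# caption, and watermark detectors from your monitoring system.
def feed_has_black_frames(feed): ...
def feed_lost_captions(feed): ...
def feed_missing_watermark(feed): ...

CHECKS = {
    "black frames": feed_has_black_frames,
    "lost captions": feed_lost_captions,
    "missing audio watermark": feed_missing_watermark,
}

def send_alert(feed, problem):
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {problem} on {feed}"
    msg["From"] = "monitoring@example.com"      # placeholder addresses
    msg["To"] = "engineering@example.com"
    msg.set_content(f"{problem} detected on {feed}; please investigate.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

while True:
    for feed in ("FEED-1", "FEED-2"):           # placeholder feed names
        for problem, check in CHECKS.items():
            if check(feed):
                send_alert(feed, problem)
    time.sleep(30)                              # poll every 30 seconds
```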

Question they’d ask: “Should we move forward with your product, how do you convince me that we will receive adequate technical support, being that we’re a 24x7 operation?”

Our take: During a crisis, you might need help from the product’s support team. 

A support team should quickly respond to your initial support request, and have the technical expertise and willingness to do whatever it takes to solve your problem. And if they’re available around-the-clock, it means that whenever something goes wrong, you won’t be left to figure it out on your own.


With SnapStream Monitoring & Compliance, you can monitor your feeds for regulatory compliance and advertising proof of performance. Our solution includes closed captioning verification, loudness monitoring, audio watermark detection, and more. SnapStream also offers tools for searching TV; sharing TV clips to Twitter, Facebook, and more; and sharing clips of live events to social media in real time. 

Loudness Compliance and the CALM Act: What You Need to Know

June 17 2019 by Tina Nazerian

calm act - loudness compliance - sound-waves-and-human-ear-1

    Pixsooz/Shutterstock

While watching TV, have you ever heard the volume increase when your show jumped to a commercial break?

The volume increase could have been the result of systems that hadn’t normalized the content for loudness.

Citing industry officials, the Los Angeles Times reported that due to the switch to digital TV in the United States in 2009, “the higher fidelity sound made the commercials seem even louder.” In 2006, the ITU-R had created a loudness algorithm (BS.1770, which now has five variants) to help make sure commercials were not dramatically louder than the programs they were accompanying.

That algorithm produces measurements in “Loudness Units relative to Full Scale” (LUFS), also known as “Loudness, K-weighted, relative to Full Scale” (LKFS). LKFS is technically an amplitude level, but it’s not just the measure of an electrical signal. It’s an attempt to measure how humans perceive the loudness of broadcast audio.

graph

This graph represents the filter applied to the raw audio input so it can be adjusted to compensate for how humans perceive the loudness of different frequencies. K-weighting is part of the equation used to determine the LKFS value. It has two parts: the first weights different frequencies based on how loud they’re perceived, and the second is modeled after the “acoustic effect” of the human head.

Here’s an analogy to help you understand LKFS: audio level is to LKFS what temperature is to wind chill temperature (or heat index). Humans don’t perceive low frequencies as sounding as loud as they actually are, but they perceive high-pitched sounds as louder than they actually are. That’s why high-pitched sounds have a higher K-weighting.  
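If you want to see what a BS.1770-style measurement looks like in code, here is a minimal Python sketch using the open-source pyloudnorm library, which applies the K-weighting filter and gating for you. The file name is a placeholder assumption.

```python
import soundfile as sf       # pip install soundfile pyloudnorm
import pyloudnorm as pyln

# Load a decoded audio file (placeholder name).
data, rate = sf.read("program_segment.wav")

# The meter applies the BS.1770 K-weighting filter and gating internally.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LKFS")
```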

The loudness algorithm the ITU-R created was not implemented in the United States until a few years later. In 2010, Congress passed the CALM Act. The law came into effect on December 13, 2012. It stipulates that in relation to the TV programs they are accompanying, all commercials must have their average loudness adjusted to be within a fairly narrow range of a fixed target. The law only applies to television programming—it does not apply to radio or internet programming. 

Key Facts about the Commercial Advertisement Loudness Mitigation (CALM) Act

  • Congress passed the CALM Act in 2010 to regulate the audio levels of TV commercials in relation to the TV programs they're accompanying. 
  • California Congresswoman Anna Eshoo authored the CALM Act. Part of her inspiration? The LA Times reports that she was "blasted by blaring ads on TV during a family holiday gathering." 
  • For loudness compliance, the CALM Act references a document called ATSC A/85 RP. 

For compliance, the law points broadcasters, cable operators, satellite TV providers, and other multichannel video programming distributors to the ATSC A/85 RP.

A/85 RP stipulates the use of ITU-R BS.1770-1 in the United States. It also recommends the adoption of a fixed target loudness of -24 LKFS; Annex I.7 of the ATSC A/85 RP specifies that target as -24 LKFS, ±2 dB.
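Once you have a measured value like the one above, checking it against the A/85 target is simple arithmetic. Here is a minimal sketch of that ±2 dB window, assuming an integrated loudness measurement as input.

```python
def is_calm_compliant(measured_lkfs: float,
                      target_lkfs: float = -24.0,
                      tolerance_db: float = 2.0) -> bool:
    """True if the measured loudness falls within the A/85 window."""
    return abs(measured_lkfs - target_lkfs) <= tolerance_db

print(is_calm_compliant(-23.2))   # True: inside the -26 to -22 LKFS window
print(is_calm_compliant(-20.5))   # False: louder than the window allows
```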

A/85 RP also notes requirements other than loudness, one example being dialnorm, short for “dialogue normalization.” Dialnorm specifies the average dialogue level for audio in absolute terms. Say you’re going from your main program to a commercial. The main program features soft-spoken people, whereas the commercial features loud people. On playback, the consumer receiver would automatically bring the quiet dialogue up and the loud dialogue down.

dialnorm1

A visual representation of how dialnorm works with a consumer cable box.

Luckily, as Dave Moulton wrote in TV Technology, if you’re using dialnorm, “you don’t need to worry very much about LKFS, because properly implemented dialnorm will pretty much take care of it for you.”
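As a simplified model of that behavior, the sketch below computes the attenuation a receiver would apply from a program’s dialnorm value, assuming the AC-3 convention of a -31 dBFS reference dialogue level. Real decoders carry dialnorm as metadata and handle this internally, so treat this purely as an illustration.

```python
REFERENCE_DIALOGUE_LEVEL = -31.0   # dBFS: AC-3 reference playback level (assumption for this sketch)

def playback_gain_db(dialnorm: float) -> float:
    """Gain (in dB) a receiver applies so dialogue lands at the reference level."""
    return REFERENCE_DIALOGUE_LEVEL - dialnorm

# A quiet program and a loud commercial both end up at the same dialogue level:
print(playback_gain_db(-27.0))   # -4.0 dB: the soft-spoken program is turned down only slightly
print(playback_gain_db(-20.0))   # -11.0 dB: the loud commercial is turned down much more
```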

It’s important to stay on the right side of the CALM Act. If viewers complain to the FCC about your organization’s loudness levels, and the FCC notices a pattern of complaints, it will start an inquiry or investigation into your organization. If there is an investigation, you’ll have to spend time proving that your equipment, and how you’ve maintained it, is in line with the law. If you don’t show actual or ongoing compliance in response to the inquiry or investigation, you may have to pay a fine.

Having a record of exactly what your programming sounded like when it aired will save you hassle and frustration. You will quickly be able to gather evidence and respond to viewer complaints.


Loudness compliance is easy with Moco: Compliance Monitoring by SnapStream. We provide TV stations, networks, and other broadcasters with solutions for logging and monitoring loudness. 

What is SnapStream? There's an unlimited amount of video content out there: 24/7 news channels, breaking news events, sports, talk shows, awards galas, entertainment shows, and so much more.

SnapStream makes a real-time news and media search engine that makes it fast and easy to find the video moments that help our customers tell great stories.
