AT&T's Blueprint For The Future of Mobile Video
September 18, 2017
The acquisition of LiveClips by DirecTV in 2014 followed by their acquisition by AT&T brought Eric full circle since he first joined AT&T Bell Labs Research in 1984. After his EE PhD and Physics MS from U. of I. Urbana-Champaign, Eric continued audio-visual speech research at Bell Labs while also playing a key role in the HDTV competition leading to the Grand Alliance and ATSC standards. His leadership of the MPEG-4 Face and Body Animation group and facial capture system development at Lucent Technologies resulted in spinning out face2face animation, inc. from Lucent New Ventures Group in 2000.
In his role as Principal Systems Engineer, Eric is a video, computer vision, and AI expert at AT&T where he builds systems for the estimation of mobile video quality of experience in a variety of application scenarios. We were lucky enough to gain his personal insights into the future of mobile video, the importance of Quality of Experience, as well as the future of visual entertainment in general.
100,000 Simultaneously Streaming Movies
Nine days ago, AT&T successfully tested a single-wavelength 400 gigabit Ethernet data connection across its production network, promising service providers the ability to deliver ten 2-hour movie downloads in less than a second, or 100,000 simultaneously streaming movies. The company describes it as a ‘future network blueprint’ for service providers and businesses alike.
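The movie figures are back-of-envelope arithmetic, and it is easy to check. The sketch below uses assumed values (a ~5 GB 2-hour HD movie and a ~4 Mb/s streaming session) that are illustrative, not AT&T's published encoding parameters:

```python
# Back-of-envelope check of the 400GbE claims. The movie size and per-stream
# bitrate are assumptions for illustration, not AT&T's actual figures.
link_bps = 400e9                      # 400 gigabits per second

movie_bits = 5 * 8e9                  # ~5 GB movie expressed in bits (40 Gb)
ten_movies_seconds = 10 * movie_bits / link_bps
print(f"Ten movie downloads: {ten_movies_seconds:.1f} s")   # → 1.0 s

stream_bps = 4e6                      # ~4 Mb/s adaptive streaming session
concurrent_streams = link_bps / stream_bps
print(f"Concurrent streams: {concurrent_streams:,.0f}")     # → 100,000
```

Under these assumptions the numbers line up: ten 40-gigabit downloads fill the 400 Gb/s wavelength for exactly one second, and 100,000 4 Mb/s streams saturate it.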
With mobile video accounting for more than half of all network traffic on the AT&T mobile network, it’s easy to see why they’ve framed this accomplishment in relation to movie downloads and streaming: network speed matters for mobile video. “Mobile video delivery is dependent on network quality which is not consistent across the world—3G is still common outside the US,” explains Dr. Petajan. “Mobile network quality varies by country and carrier, as does the cost of mobile data, which further impacts accessibility to good quality mobile video content consumption.”
Improving network coverage is therefore a major priority for the company, but only insofar as it improves the overall user experience. This is reflected in Dr. Petajan’s own departmental focus on the accurate, automatic estimation of mobile video quality of experience (QoE), which he says is vital for adjusting transcoding, packaging, and mobile client player adaptation logic to give viewers the best possible experience. His team does this by anticipating which factors will impact QoE and then attempting to mitigate them in advance.
Say Goodbye To Buffering
“Video stream quality (resolution, frame rate, lack of compression artifacts), as well as uninterrupted delivery of video (low startup delay and no stalls), are important for the end user video experience,” Dr. Petajan argues. “Automatic QoE estimation from network data helps guide network improvements and detect outages. Network-based QoE models are most accurate when trained using ground truth data from Full Reference (FR) QoE KPI measurements.”
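To make the idea concrete, here is a minimal sketch of what "training a network-based model on FR ground truth" can mean in its simplest form: fitting a line from a network-observable KPI to FR-measured QoE scores. The feature, the data values, and the single-variable model are all invented for illustration; real QoE models use many KPIs and far richer modeling:

```python
# Illustrative sketch (not AT&T's model): fit a simple least-squares line
# mapping a network-side KPI (average throughput) to Full-Reference QoE
# scores used as ground truth. All data values are made up.
throughput_mbps = [1.0, 2.0, 4.0, 8.0]   # network-observable feature
fr_qoe_score    = [2.0, 3.0, 4.0, 4.5]   # FR-measured ground truth (1-5 scale)

n = len(throughput_mbps)
mean_x = sum(throughput_mbps) / n
mean_y = sum(fr_qoe_score) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(throughput_mbps, fr_qoe_score))
         / sum((x - mean_x) ** 2 for x in throughput_mbps))
intercept = mean_y - slope * mean_x

def estimate_qoe(mbps):
    """Network-based QoE estimate, usable where FR measurement is impossible."""
    return intercept + slope * mbps

print(f"Estimated QoE at 4 Mb/s: {estimate_qoe(4.0):.2f}")  # ≈ 3.46
```

The point of the calibration step Dr. Petajan describes is exactly this mapping: once fitted against FR ground truth, the model can score sessions from network data alone, at scale.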
“Adaptive bitrate video QoE is composed of Video Quality (VQ) and Delivery Quality (DQ),” he explains. “The player attempts to balance VQ vs DQ during a Live or Video on Demand (VoD) streaming session when bandwidth is fluctuating.”
“For example, the initial start-up delay / buffering time (a DQ KPI) will be worse when the initial VQ is higher. An objective VQ metric that is subjectively calibrated indicates when bits are being wasted if VQ is too high for the display size. Both VQ and DQ KPIs must first be calibrated through subjective testing and modelling of human perception before automatic QoE estimation can be performed.”
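The VQ-versus-DQ balancing act the player performs can be sketched in a few lines. The rendition ladder, safety margin, and buffer threshold below are hypothetical values chosen for illustration, not AT&T's actual player adaptation logic:

```python
# Minimal sketch of adaptive bitrate (ABR) rendition selection, illustrating
# the VQ-vs-DQ trade-off. All numbers are hypothetical.
RENDITIONS_KBPS = [400, 800, 1600, 3200, 6400]  # encoding ladder (VQ rungs)

def choose_rendition(bandwidth_kbps, buffer_s, safety=0.8, low_buffer_s=5.0):
    """Pick the highest bitrate the network can sustain (favoring VQ),
    dropping to the lowest rung when the buffer runs low (favoring DQ)."""
    if buffer_s < low_buffer_s:          # stall is imminent: DQ wins
        return RENDITIONS_KBPS[0]
    budget = bandwidth_kbps * safety     # headroom for bandwidth fluctuation
    eligible = [r for r in RENDITIONS_KBPS if r <= budget]
    return eligible[-1] if eligible else RENDITIONS_KBPS[0]

print(choose_rendition(bandwidth_kbps=5000, buffer_s=20))  # → 3200
print(choose_rendition(bandwidth_kbps=5000, buffer_s=2))   # → 400
```

With the same 5 Mb/s of bandwidth, the player picks a high-VQ rendition when the buffer is healthy but sacrifices VQ to protect DQ (no stall) when the buffer is nearly empty, which is the fluctuating-bandwidth balancing Dr. Petajan describes.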
Dr. Petajan explains that, when viewing content, factors such as network quality, device capability, display size, and viewing environment all affect the fine balance between Video Quality and Delivery Quality. These factors will no doubt be improved by the accelerating deployment of computer technology and ultra-fast networks. “As mobile entertainment experiences become more immersive and interactive, low latency data networking and edge computing will be needed to provide an entertaining QoE.”
Visual Entertainment: The Near Future
Recommendation engines already use machine learning to predict content that users will enjoy watching. However, AI and machine learning will not only improve content recommendation, but content production itself. “In the realm of content production, efficiency and quality are improved by face and object recognition, as well as speech-to-text. Sports and news productions, for instance, already use automatic metadata extraction to generate user-facing statistics and to clip plays from live sporting events.”
This will extend into the far future of visual entertainment, which Dr. Petajan believes will include automated content production and real-time immersive experiences that will require new QoE estimation tools to minimize network utilization, as well as client- and cloud-side computing resources. He predicts that visual QoE metrics will be developed for new forms of immersive entertainment, from VR to AR. “User face and body animation data streams will drive high quality avatars, while user speech and emotional state will be recognized for conversational interface interaction with other avatars.”
“Both human-driven and AI-driven avatars will be present in future virtual worlds and distinguishing between them will be difficult. Continuous identification of users by face and voice recognition—and lip reading—will eliminate the need for passwords to access content and purchase products during immersive experiences.”
Moving forward: The AI Summit San Francisco
“The industry has made tremendous leaps in technology to greatly improve the end-user experience of video. I hope to discuss cutting edge developments in this space with the AI Summit attendees,” he says. At the Summit, Dr. Petajan also plans to share information about the AT&T Video Optimizer, a free, open-source diagnostic tool that mobile app developers can use to identify issues that impact user experience or waste network and device resources.
Meanwhile, AT&T plans to launch its Video Experience Analyzer. Part of the AT&T Entertainment Experience Suite, it will provide QoE estimates for the optimization of any streaming video delivery pipeline by content providers and application developers.
He adds, “Of course, I also look forward to learning about the latest AI technology advances and applications, and to connecting with other experts.”
Dr. Eric Petajan will be speaking at next week’s AI Summit San Francisco. His keynote speech is entitled ‘Automatic Estimation of Mobile Video QoE and the Future of Visual Entertainment’.