The television of the future is being invented in the Eurecom laboratories at Sophia-Antipolis. The aim is not to create technologies for ever bigger, sharper screens, but rather to reinvent the way TV is used. In a French Unique Interministerial Fund (FUI) project named NexGenTV, researchers, broadcasters and developers have joined forces to achieve this goal. Launched in 2015 for a duration of three years, the project is already showing results. Raphaël Troncy, a data science researcher at Eurecom involved in the project, presents the progress made to date and the potential for improvement, primarily based on enriching content through a second screen.
With the NexGenTV project, you are trying to reinvent the way we use TV. Where did this idea come from?
Raphaël Troncy: This is a fairly widespread movement in the audiovisual sector. TV channels are realizing that people are using their television screens less and less to watch their programs. They watch them through other modes, such as replay or special mobile phone apps, and do other things at the same time. TV channels are therefore pushing for innovative applications. The problem is that nobody really knows what to do, because nobody knows what users want. At Eurecom, we worked on an initial project called LinkedTV, financed by the European FP7 program. We worked with users to find out what they want, and what the channels want. Then, with NexGenTV, we focused on applications for a second screen, such as a tablet, to offer enriched content to viewers while allowing TV channels to keep editorial control.
Although the project won’t be completed until next year, have you already developed promising applications?
RT: Yes, our technology has been used since last summer by the tablet app for the beIN Sports channels. The technology automatically selects the highlights and gives access to additional content for Ligue 1 football matches. Users can access events such as goals or fouls, see who was touching the ball at a given moment, or view statistics on each player, all in real time. We are working towards offering information such as replays of similar goals by other players in the championship, or goals in previous matches by the player who has just scored.
In this example, what is your contribution as a researcher?
RT: The technology we have developed opens up several possibilities. Firstly, it collects and formats the data sent by service providers: for example, the number of kilometers a player has covered, or images of the goals. This is challenging, because live conditions mean it must happen within the few seconds between the real event and the moment it is broadcast to the user. Secondly, the technology performs semantic analysis, extracting data from players' sites, official FIFA or French Football Federation sites, or Wikipedia, to provide a condensed version to the TV viewer.
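The two steps described above, collecting and formatting live provider data and condensing it with background facts, could be sketched as follows. This is a hypothetical illustration only: the `build_event_card` helper, the field names and the data shapes are invented for the example, not taken from the project.

```python
# Hypothetical sketch: merge a live provider event with background facts
# (e.g. gathered from federation sites or Wikipedia) into one condensed
# card for the second screen. All field names and data are invented.
def build_event_card(event, player_facts):
    facts = player_facts.get(event["player"], {})
    return {
        "minute": event["minute"],
        "type": event["type"],
        "player": event["player"],
        "distance_km": event.get("distance_km"),   # from the live feed
        "club": facts.get("club"),                 # from reference sources
        "season_goals": facts.get("season_goals"),
    }

live_event = {"minute": 67, "type": "goal", "player": "Player A", "distance_km": 9.3}
reference = {"Player A": {"club": "Example FC", "season_goals": 15}}
print(build_event_card(live_event, reference))
```

Under live conditions, the merge itself is cheap; the hard constraint mentioned above is that both inputs must be available within a few seconds of the real event.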
Do you also perform image analysis, for example to enable viewers to watch similar goals?
RT: We did this for the first prototypes, but we realized that the data provided were already rich enough. However, we do analyze images for another use: the many political debates taking place at present, during the election period. There is no application for this yet; we are developing it. But we tested our approach on the debates for the two primary elections, and we are continuing with the current and upcoming debates for the presidential and legislative elections. We would like to be able to display an extract of a candidate's previous speech on the tablet while they are talking about a particular subject, either because what they are saying complements or contradicts it, or because it is linked to a relevant proposal in their platform. We also want to be able to isolate the "best moments" based on parallel activity on Twitter, or on a semantic analysis of the candidates' speeches, and offer a condensed summary.
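As a rough illustration of how "best moments" might be isolated from parallel Twitter activity, the sketch below flags minutes whose tweet volume spikes above a rolling baseline. This is not the project's actual method; the window size, threshold and sample data are assumptions made for the example.

```python
# Hypothetical sketch: flag "best moments" of a debate from spikes in
# parallel Twitter activity. Mark each minute whose tweet count exceeds
# the mean of the previous `window` minutes by k standard deviations.
from statistics import mean, stdev

def detect_highlights(counts, window=5, k=2.0):
    """Return indices of minutes whose tweet volume spikes above baseline."""
    highlights = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Floor sigma at 1.0 so a perfectly flat baseline does not
        # turn tiny fluctuations into "highlights".
        if counts[i] > mu + k * max(sigma, 1.0):
            highlights.append(i)
    return highlights

# Example: steady chatter, then a burst when a candidate makes a strong statement.
tweets_per_minute = [40, 42, 38, 41, 39, 43, 40, 180, 150, 44, 41]
print(detect_highlights(tweets_per_minute))  # → [7]
```

In practice such a signal would likely be combined with the semantic analysis of the speeches mentioned above, since tweet volume alone cannot say what a moment was about.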
What is the value of image analysis in this project?
RT: For replays, image analysis allows us to better segment a program, to offer the viewer a frame of reference. But it also provides access to specific information. For example, during the last debate of the right-wing primary election, we measured the on-screen presence time of each candidate using facial recognition based on deep learning. We wanted to see whether there was a difference in the way the candidates were treated, or whether there was an equal balance, as is the case with speaking time, which is regulated by the CSA (the French media regulator). We found that the broadcasters' choices were more heavily weighted towards Nicolas Sarkozy than the other candidates. This can be explained: he was strongly challenged by the other candidates, so the cameras focused on him even when he was not speaking. But it also demonstrates how an image recognition application can give viewers keys to interpreting programs.
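Once a face recognizer has labeled who is visible in each sampled frame, the presence-time measurement reduces to a simple accounting step. The sketch below assumes such a deep-learning detector (not shown) has already produced per-frame labels; the candidate names and the one-frame-per-second sampling rate are illustrative assumptions, not details from the project.

```python
# Hypothetical sketch of the presence-time accounting: sum up how long
# each candidate appears on screen, given per-frame face labels produced
# by a separate (deep-learning) face recognizer.
from collections import Counter

def presence_seconds(frame_labels, fps=1):
    """frame_labels: one set of visible candidate names per sampled frame,
    sampled at `fps` frames per second. Returns seconds on screen per name."""
    seconds = Counter()
    for faces in frame_labels:
        for name in faces:
            seconds[name] += 1 / fps
    return dict(seconds)

# Five sampled frames: A alone, A and B together, B alone, A alone, nobody.
frames = [{"A"}, {"A", "B"}, {"B"}, {"A"}, set()]
print(presence_seconds(frames))  # → {'A': 3.0, 'B': 2.0}
```

Comparing these totals across candidates is what revealed the imbalance described above.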
The goal of your technology is to give even more context, and inform the user?
RT: Not necessarily; we also have an example of use with an educational program broadcast on France Télévisions. In this case, we wanted to offer viewers quizzes as supporting educational material. We are also working on adapting advertising to users for replay viewing. The idea is to make the most of the potential of secondary screens to improve the user's experience.
NexGenTV: a consortium for inventing the new TV
The NexGenTV project combines researchers from Eurecom and the Irisa joint research unit (co-supervised by IMT). The consortium also includes companies from the audiovisual sector: Wildmoka is in charge of creating applications for secondary screens, along with Envivio (taken over by Ericsson) and Avisto. The partners work in collaboration with an associated club of broadcasters, which provides the audiovisual content required for creating the applications. The club includes France Télévisions, TF1, Canal+ and others.