LG has come up with a surprisingly interesting way to apply AI to TV
Generally speaking, at CES and elsewhere, when a company says something is powered by AI, they’re blowing smoke. And while smoke was definitely blown at LG’s otherwise unremarkable press conference this morning, the company also announced it was applying AI in a way that’s both unexpected and smart: intelligently enhancing TV images using computer vision.
Now, before anyone accuses me of falling for the hype, let me just say that this feature is totally unnecessary and probably a bad idea in a lot of ways — a high-quality and correctly calibrated display panel will give you an excellent image, and things like motion interpolation and intelligent detail enhancement may only worsen it. No, I just think it’s a cool idea.
The basic issue is this: given an image on the screen, there are different things that need to happen to make it look better. Color banding can be smoothed, for instance, but if that smoothing operation covers the whole screen, it might obscure important details. So you only want to smooth part of the screen, while perhaps sharpening the more high-contrast parts.
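To make that concrete, here's a rough sketch of what region-selective smoothing could look like, using OpenCV and NumPy: estimate how much detail each pixel's neighborhood has, then blend a heavily blurred copy of the frame back in only where the image is flat. The file name, window sizes, and thresholds are all placeholders; this is a generic illustration, not anything LG has described.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png").astype(np.float32)  # any decoded frame; the path is illustrative

# Estimate local detail: per-pixel standard deviation over a small window
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mean = cv2.blur(gray, (9, 9))
var = cv2.blur(gray * gray, (9, 9)) - mean * mean
flat = (np.sqrt(np.maximum(var, 0)) < 4.0).astype(np.float32)  # 1.0 where the image is "flat"

# Feather the mask so the boundary between treated and untreated areas doesn't show
flat = cv2.GaussianBlur(flat, (31, 31), 0)[..., None]

# A heavy blur hides banding; blend it back in only over the flat regions
smoothed = cv2.GaussianBlur(frame, (0, 0), 5.0)
out = (flat * smoothed + (1.0 - flat) * frame).astype(np.uint8)
cv2.imwrite("deband.png", out)
```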
This can be accomplished in a number of ways, and one is to intelligently identify edges in the image. You can then divide it into pieces along those edges, or simply sharpen along them to emphasize them. But that can go wrong if, for example, a building intersects the horizon line: both get the same enhancement treatment, as if the building were part of the land. Basically, different parts of the image call for different operations, and it's not always obvious which parts need which.
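For the edge-based approach, a minimal sketch looks something like this: find edges with a standard detector (Canny, here), grow them into a soft mask, and sharpen only near that mask. It also shows the limitation just described, since the detector treats a rooftop edge and the horizon exactly the same. Again, the thresholds and file path are illustrative, not anything from LG.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # illustrative path

# Detect edges, then grow them into a soft sharpening mask
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 60, 180)
mask = cv2.dilate(edges, np.ones((5, 5), np.uint8))
mask = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (15, 15), 0)[..., None]

# Unsharp mask, blended in only near edges. Note the limitation from the text:
# the detector can't tell a rooftop edge from the horizon, so every strong edge
# gets the same treatment.
blur = cv2.GaussianBlur(frame, (0, 0), 2.0)
sharp = cv2.addWeighted(frame, 1.6, blur, -0.6, 0)
out = (mask * sharp + (1.0 - mask) * frame).astype(np.uint8)
cv2.imwrite("edge_sharpened.png", out)
```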
What LG’s latest TVs do, or what it claims they do at any rate, is apply actual object recognition AI to the problem. This is the kind of thing that, in a specialized form, identifies faces in a picture or can tell whether something is a dog or a cat.
In this case, even rudimentary object recognition would allow for a scene to be parsed more intelligently: sky distinct from landscape; landscape distinct from buildings; people and cars distinct from buildings; objects on tables distinct from the tables themselves — and so on.
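To give a flavor of what that parsing looks like in software, here's a sketch using an off-the-shelf semantic segmentation model from torchvision (DeepLabV3 pretrained on the PASCAL VOC label set). That label set covers people, cars, and the like but not sky or buildings, which would need a scene-parsing model trained on a dataset such as ADE20K. This is a generic illustration of per-pixel object labels, not whatever LG actually runs on its TV silicon.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Pretrained model; its label set is the 21 PASCAL VOC classes (person, car, ...)
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.png").convert("RGB")      # illustrative file name
x = preprocess(img).unsqueeze(0)                  # (1, 3, H, W)

with torch.no_grad():
    scores = model(x)["out"][0]                   # (21, H, W) per-class scores
labels = scores.argmax(0)                         # (H, W) per-pixel class index

person_mask = (labels == 15)                      # 15 = "person" in the VOC labels
car_mask = (labels == 7)                          # 7  = "car"
background_mask = (labels == 0)                   # 0  = background
```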
Not necessarily all at once, you understand. LG’s brief handling of the capability onstage and failure to mention it in any real detail in its announcements suggest to me that this process is at an early stage — for all we know it may be totally ineffective at this point.
But it’s a fun idea and a smart one, something that’s rather rare at CES. Judicious application of it could, for example, let the TV identify items that are moving jerkily and apply frame interpolation only to those, or let viewers choose which classes of objects get sharpening, color correction, and so on.
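Once you have per-pixel labels, routing different enhancements to different classes is mostly bookkeeping. Here's a hypothetical sketch that reuses the masks from the segmentation example above: sharpen the regions labeled as people, smooth the background to hide banding, and feather each mask so the seams don't show. The class choices and filter settings are invented for illustration.

```python
import cv2
import numpy as np

def sharpen(img):
    blur = cv2.GaussianBlur(img, (0, 0), 2.0)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)       # unsharp mask

def smooth(img):
    return cv2.bilateralFilter(img, 9, 40, 40)            # edge-preserving smoothing

def enhance_by_class(frame, masks, ops):
    """frame: HxWx3 uint8; masks: {class: HxW bool}; ops: {class: fn(img) -> img}."""
    out = frame.astype(np.float32)
    for name, mask in masks.items():
        if name not in ops:
            continue
        # Feather each mask so per-class processing doesn't leave visible seams
        m = cv2.GaussianBlur(mask.astype(np.float32), (31, 31), 0)[..., None]
        out = m * ops[name](frame).astype(np.float32) + (1.0 - m) * out
    return out.astype(np.uint8)

# Hypothetical usage, reusing person_mask and background_mask from the
# segmentation sketch above: sharpen people, smooth the background,
# leave everything else alone.
frame = cv2.imread("frame.png")
masks = {"person": person_mask.numpy(), "background": background_mask.numpy()}
result = enhance_by_class(frame, masks, {"person": sharpen, "background": smooth})
cv2.imwrite("enhanced.png", result)
```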
I fully expect object-based enhancement to be a standard feature in TVs over the next few years (if not by the end of the week — it is CES after all), though, of course, truly useful or imaginative applications of it will probably take a little longer.