Visualizing climate change with AI models?

Submitted by 本小妞迷上赌 on 2021-01-09 17:04:48

Generative AI models have been co-opted to synthesize everything from faces and apartments to butterflies, but a novel subcategory seeks to raise awareness of climate change by illustrating the consequences of catastrophic flooding. In an effort to establish a metric for quantifying the realism of these synthetic climate change images, researchers from the University of Montreal and Stanford University recently detailed several evaluation methods in a preprint paper. They say that their work, while preliminary, begins to bridge the gap between automated and human-based evaluation of generative models.

The research was notably coauthored by Turing Award winner and University of Montreal professor Yoshua Bengio, who was among the first to combine neural networks with probabilistic models of sequences. In a paper published nearly two decades ago, he introduced the concept of word embeddings, a language modeling and feature learning paradigm in which words or phrases from a vocabulary are mapped to vectors of real numbers. Embeddings, along with Bengio's more recent work with computer scientist and Google Brain researcher Ian Goodfellow on generative adversarial networks (GANs), have revolutionized machine translation, image generation, audio synthesis, and text-to-speech systems.

“Historically, climate change has been an issue around which it is hard to mobilize collective action … One reason [is] that it is difficult for people to mentally simulate the complex and probabilistic effects of climate change, which are often perceived to be distant in terms of time and space,” wrote the paper’s coauthors. “Climate communication literature has asserted that effective communications arises from messages that are emotionally charged and personally relevant over traditional forms of expert communication such as scientific reports, and that images in particular are key in increasing the awareness and concern regarding the issue of climate change.”

The researchers note that existing evaluation methods that could be applied to generated climate change images have "strong limitations": they don't correlate with human judgment, which makes it difficult to measure how sophisticated the image generation models are. They propose an alternative, a manual process in which human volunteers evaluate image-and-style combinations drawn from the models, based on input images of diverse locations and building types (houses, farms, streets, cities), each rendered in over a dozen AI-generated styles. The evaluators are shown a mix of real and generated images and judge whether each is real; for every generated image, an error rate is computed as the proportion of evaluators who judged it real, with higher values indicating a more realistic image.
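The error-rate metric described above is simple to state precisely. The sketch below is illustrative, not code from the paper: it assumes each generated image is shown to several evaluators, records each evaluator's real/fake verdict as a boolean, and averages the per-image fractions of "judged real" verdicts.

```python
def error_rate(judgments):
    """Fraction of evaluators who labeled a single generated image as real.

    judgments: list of booleans, True if that evaluator judged the image real.
    """
    if not judgments:
        raise ValueError("need at least one judgment")
    return sum(judgments) / len(judgments)


def average_error_rate(per_image_judgments):
    """Average the per-image error rates across all generated images."""
    rates = [error_rate(j) for j in per_image_judgments]
    return sum(rates) / len(rates)


# Hypothetical example: three generated images, each shown to four evaluators.
judgments = [
    [True, True, False, True],    # 0.75 -- quite convincing
    [False, False, False, True],  # 0.25
    [True, False, True, True],    # 0.75
]
print(average_error_rate(judgments))  # ~0.583
```

A higher average means evaluators were fooled more often, i.e. the generator produced more realistic flooding imagery.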
