Youssef Hosni

Summary

The provided content outlines significant advancements in computer vision research from the second week of January 2024, focusing on important papers in areas such as stable diffusion, vision-language models, image and video generation and editing, and image recognition.

Abstract

The article presents a curated overview of the latest breakthroughs in computer vision, emphasizing the most influential research papers published during the second week of January 2024. It covers various subfields, including image recognition, vision model optimization, generative adversarial networks (GANs), image segmentation, and video analysis. The article highlights innovative techniques such as Object-Centric Diffusion for efficient video editing, diffusion priors for dynamic view synthesis, and fast score-based diffusion models for acoustic echo cancellation in the stable diffusion category. In the realm of vision-language models, it discusses the personalization of text-to-image models and the distillation of vision-language models on millions of videos. The section on image generation and editing showcases advancements in human-aligned evaluation metrics for text-to-3D generation, amortized generative 3D Gaussians for single image to 3D conversion, and fast and controllable image generation with latent consistency models. The video generation and editing segment introduces a large-scale dataset for automated end-to-end video dubbing, multi-stage high-aesthetic video generation, jump cut smoothing for talking heads, and universal relightable hand models. Lastly, the image recognition section delves into denoising vision transformers and open-vocabulary segmentation models capable of interactively segmenting and recognizing thousands of classes.

Opinions

  • The author suggests that the insights shared in the article are valuable for researchers, practitioners, and enthusiasts in the field of computer vision.
  • Subscribing to the "To Data & Beyond" newsletter is recommended for those interested in staying updated with the latest AI advancements and gaining inspiration for future work.
  • The author emphasizes the importance of the work presented, stating that it can inspire action or prepare readers for the future of AI.
  • The article encourages support for the author's work through clapping for the story, subscribing to the newsletter, following on social media platforms, and considering mentoring sessions for career development in data science and AI.

Top Important Computer Vision Papers for the Week from 08/01 to 14/01

Stay Updated with Recent Computer Vision Research

Every week, top-tier academic conferences and journals showcase innovative research in computer vision, presenting exciting breakthroughs in various subfields such as image recognition, vision model optimization, generative adversarial networks (GANs), image segmentation, video analysis, and more.

This article provides a comprehensive overview of the most significant papers published in the Second Week of January 2024, highlighting the latest research and advancements in computer vision. Whether you’re a researcher, practitioner, or enthusiast, this article will provide valuable insights into the state-of-the-art techniques and tools in computer vision.

Table of Contents:

  1. Stable Diffusion
  2. Vision Language Models
  3. Image Generation & Editing
  4. Video Generation & Editing
  5. Image Recognition

Most insights I share on Medium have previously been shared in my weekly newsletter, To Data & Beyond.

If you want to be up-to-date with the frenetic world of AI while also feeling inspired to take action or, at the very least, to be well-prepared for the future ahead of us, this is for you.

🏝Subscribe below🏝 to become an AI leader among your peers and receive content not available on any other platform, including Medium:

1. Stable Diffusion

1.1. Object-Centric Diffusion for Efficient Video Editing

Diffusion-based video editing has reached impressive quality and can transform the global style, local structure, and attributes of a given video, following textual edit prompts. However, such solutions typically incur heavy memory and computational costs to generate temporally coherent frames, either in the form of diffusion inversion or cross-frame attention.

In this paper, the authors analyze such inefficiencies and suggest simple yet effective modifications that allow significant speed-ups whilst maintaining quality. Moreover, they introduce Object-Centric Diffusion, coined as OCD, to further reduce latency by allocating computations more toward foreground-edited regions that are arguably more important for perceptual quality.

They achieve this with two novel proposals:

  1. Object-centric sampling, which decouples the diffusion steps spent on salient regions from those spent on the background, allocating most of the model capacity to the former.
  2. Object-centric 3D Token Merging, which reduces the cost of cross-frame attention by fusing redundant tokens in unimportant background regions.

Both techniques are readily applicable to a given video editing model without retraining and can drastically reduce its memory and computational cost. The authors evaluate their proposals on inversion-based and control-signal-based editing pipelines and show a latency reduction of up to 10x for comparable synthesis quality.
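
To make the token-merging idea concrete, here is a minimal, hypothetical PyTorch sketch of fusing redundant background tokens before cross-frame attention, assuming a per-token foreground mask is already available. It is not the authors' implementation, and the merge rule (simple average pooling) is a stand-in for their Object-centric 3D Token Merging.

```python
import torch

def merge_background_tokens(tokens, fg_mask, merge_ratio=4):
    """Illustrative sketch: keep foreground tokens, average-pool background tokens.

    tokens:      (N, D) transformer tokens for one frame
    fg_mask:     (N,) boolean, True for tokens covering edited foreground regions
    merge_ratio: how many background tokens to fuse into one
    """
    fg = tokens[fg_mask]                       # foreground kept at full resolution
    bg = tokens[~fg_mask]
    # Pad so the background count divides evenly, then average-pool groups of tokens.
    pad = (-bg.shape[0]) % merge_ratio
    if pad:
        bg = torch.cat([bg, bg[-1:].expand(pad, -1)], dim=0)
    bg = bg.view(-1, merge_ratio, bg.shape[-1]).mean(dim=1)
    return torch.cat([fg, bg], dim=0)          # fewer tokens enter cross-frame attention

# Example: 1024 tokens, 25% foreground -> roughly 448 tokens after merging.
tokens = torch.randn(1024, 320)
fg_mask = torch.zeros(1024, dtype=torch.bool)
fg_mask[:256] = True
print(merge_background_tokens(tokens, fg_mask).shape)
```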

1.2. Diffusion Priors for Dynamic View Synthesis from Monocular Videos

Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos. Existing methods struggle to distinguish between motion and structure, particularly in scenarios where camera poses are either unknown or constrained compared to object motion.

Furthermore, with information solely from reference images, it is extremely challenging to hallucinate unseen regions that are occluded or partially observed in the given videos. To address these issues, we first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique. Subsequently, we distill the knowledge from the finetuned model into a 4D representation encompassing dynamic and static Neural Radiance Fields (NeRF) components.

The proposed pipeline achieves geometric consistency while preserving the scene identity. We perform thorough experiments to evaluate the efficacy of the proposed method qualitatively and quantitatively. The results demonstrate the robustness and utility of our approach in challenging cases, further advancing dynamic novel view synthesis.
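
The distillation step can be pictured with a small score-distillation sketch. The snippet below is a generic SDS-style update written against a frozen epsilon-prediction diffusion prior; the function names, the uniform timestep range, and the loss surrogate are illustrative assumptions, not the paper's exact pipeline.

```python
import torch

def score_distillation_step(render, diffusion_eps, alphas_cumprod, text_emb):
    """One SDS-style update: nudge a rendered image toward a frozen diffusion prior.

    render:         (1, 3, H, W) differentiable rendering from the 4D representation
    diffusion_eps:  frozen network predicting added noise, eps(x_t, t, text)
    alphas_cumprod: (T,) cumulative noise schedule of the frozen prior
    """
    t = torch.randint(50, 950, (1,))                   # random diffusion timestep
    a = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(render)
    x_t = a.sqrt() * render + (1 - a).sqrt() * noise   # forward-noise the rendering
    with torch.no_grad():
        eps_pred = diffusion_eps(x_t, t, text_emb)     # frozen prior's prediction
    # SDS surrogate loss: gradient (eps_pred - noise) flows only into the renderer.
    grad = (eps_pred - noise).detach()
    return (grad * render).sum()
```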

1.3. FADI-AEC: Fast Score-Based Diffusion Model Guided by Far-end Signal for Acoustic Echo Cancellation

Despite the potential of diffusion models in speech enhancement, their deployment in Acoustic Echo Cancellation (AEC) has been restricted. In this paper, the authors propose DI-AEC, pioneering a diffusion-based stochastic regeneration approach dedicated to AEC.

Further, the authors propose FADI-AEC, a fast score-based diffusion AEC framework to save computational demands, making it favorable for edge devices.

It stands out by running the score model once per frame, achieving a significant gain in processing efficiency. In addition, the authors introduce a novel noise generation technique in which far-end signals are utilized, incorporating both far-end and near-end signals to refine the score model’s accuracy.

They test the proposed method on the ICASSP 2023 Microsoft deep echo cancellation challenge evaluation dataset, where it outperforms some of the end-to-end methods and other diffusion-based echo cancellation methods.
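
As a rough picture of the "one score evaluation per frame" idea, the toy loop below calls a score model exactly once per spectrogram frame, conditioned on the far-end reference. All names and the update rule are hypothetical simplifications, not the FADI-AEC algorithm.

```python
import torch

def echo_cancel_stream(frames_near, frames_far, score_model, step=0.1):
    """Toy sketch: a single score-model call per frame, conditioned on the far-end signal.

    frames_near: (T, F) near-end (microphone) spectrogram frames containing echo
    frames_far:  (T, F) far-end (loudspeaker) reference frames
    score_model: callable(x, far) -> estimated score for the current frame
    """
    out = []
    for near, far in zip(frames_near, frames_far):
        # A far-end-informed noise generator would replace this plain Gaussian noise.
        x = near + step * torch.randn_like(near)
        x = x + step * score_model(x, far)     # exactly one score evaluation per frame
        out.append(x)
    return torch.stack(out)
```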

2. Vision Language Models

2.1. PALP: Prompt Aligned Personalization of Text-to-Image Models

Content creators often aim to create personalized images using personal subjects that go beyond the capabilities of conventional text-to-image models.

Additionally, they may want the resulting image to encompass a specific location, style, ambiance, and more. Existing personalization methods may compromise personalization ability or the alignment to complex textual prompts.

This trade-off can impede the fulfillment of user prompts and subject fidelity. To address this issue, the authors propose a new approach that focuses on personalization for a single prompt, which they term prompt-aligned personalization. While this may seem restrictive, their method excels at improving text alignment, enabling the creation of images from complex and intricate prompts that may pose a challenge for current techniques.

In particular, this method keeps the personalized model aligned with a target prompt using an additional score distillation sampling term. The authors demonstrate the versatility of the method in multi- and single-shot settings and further show that it can compose multiple subjects or draw inspiration from reference images, such as artworks. They compare their approach quantitatively and qualitatively with existing baselines and state-of-the-art techniques.
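
One plausible way to write such a combined objective is sketched below: a standard personalization loss on the subject images plus a term that keeps the personalized model's predictions under the target prompt close to a frozen copy of the original model. This is an illustration under assumed epsilon-prediction models and a made-up weighting, not PALP's exact score-distillation formulation.

```python
import torch

def palp_style_loss(unet, frozen_unet, x0, noise, t, a, subject_emb, target_emb):
    """Hypothetical sketch: personalization loss plus a prompt-alignment distillation term.

    unet:        model being personalized (epsilon-prediction)
    frozen_unet: frozen copy of the original text-to-image model
    a:           scalar alpha_bar_t of the noise schedule at timestep t
    """
    a = torch.as_tensor(a, dtype=x0.dtype)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    # (1) usual personalization objective on the subject images
    loss_personal = (unet(x_t, t, subject_emb) - noise).pow(2).mean()
    # (2) alignment term: keep predictions under the *target prompt* close to the frozen
    #     prior, a rough stand-in for the paper's score distillation sampling term
    with torch.no_grad():
        eps_prior = frozen_unet(x_t, t, target_emb)
    loss_align = (unet(x_t, t, target_emb) - eps_prior).pow(2).mean()
    return loss_personal + 0.1 * loss_align    # the weighting is an arbitrary illustration
```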

2.2. Distilling Vision-Language Models on Millions of Videos

The recent advance in vision-language models is largely attributed to the abundance of image-text data. We aim to replicate this success for video-language models, but there simply is not enough human-curated video-text data. We thus resort to fine-tuning a video-language model from a strong image-language baseline with synthesized instructional data.

The resulting video-language model is then used to auto-label millions of videos to generate high-quality captions. We show the adapted video-language model performs well on a wide range of video-language benchmarks. For instance, it surpasses the best prior result on open-ended NExT-QA by 2.8%. Besides, our model generates detailed descriptions for previously unseen videos, which provide better textual supervision than existing methods.

Experiments show that a video-language dual-encoder model contrastively trained on these auto-generated captions is 3.8% better than the strongest baseline that also leverages vision-language models. Our best model outperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video retrieval by 6%.
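
The contrastive stage can be illustrated with a standard CLIP-style symmetric InfoNCE loss over (video, auto-generated caption) pairs; the encoders, batch size, and embedding dimension below are placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of (video, auto-generated caption) pairs.

    video_emb, text_emb: (B, D) embeddings from the video and text encoders.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature            # (B, B) similarity matrix
    targets = torch.arange(v.shape[0])        # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy batch: 8 clips paired with 8 pseudo-captions produced by the video-language model.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```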

3. Image Generation & Editing

3.1. GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation

Despite recent advances in text-to-3D generative methods, there is a notable absence of reliable evaluation metrics. Existing metrics usually focus on a single criterion each, such as how well the asset aligns with the input text.

These metrics lack the flexibility to generalize to different evaluation criteria and might not align well with human preferences. Conducting user preference studies is an alternative that offers both adaptability and human-aligned results.

User studies, however, can be very expensive to scale. This paper presents an automatic, versatile, and human-aligned evaluation metric for text-to-3D generative models. To this end, we first develop a prompt generator using GPT-4V to generate evaluating prompts, which serve as input to compare text-to-3D models.

We further design a method instructing GPT-4V to compare two 3D assets according to user-defined criteria. Finally, we use these pairwise comparison results to assign these models Elo ratings. Experimental results suggest our metric strongly aligns with human preference across different evaluation criteria.
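
The Elo aggregation step is straightforward to sketch. The snippet below shows how pairwise win/loss judgments, however they are produced, can be folded into per-model Elo ratings; the K-factor, initial ratings, and model names are arbitrary illustrative choices.

```python
def update_elo(rating_a, rating_b, a_wins, k=32):
    """One Elo update after a pairwise comparison between model A and model B."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

# Hypothetical comparisons: (model_a, model_b, did_a_win) as judged on shared prompts.
ratings = {"model_x": 1000.0, "model_y": 1000.0, "model_z": 1000.0}
comparisons = [("model_x", "model_y", True), ("model_y", "model_z", False),
               ("model_x", "model_z", True)]
for a, b, a_wins in comparisons:
    ratings[a], ratings[b] = update_elo(ratings[a], ratings[b], a_wins)
print(ratings)
```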

3.2. AGG: Amortized Generative 3D Gaussians for Single Image to 3D

Given the growing need for automatic 3D content creation pipelines, various 3D representations have been studied to generate 3D objects from a single image. Due to their superior rendering efficiency, 3D Gaussian splatting-based models have recently excelled in both 3D reconstruction and generation.

However, 3D Gaussian splatting approaches for image-to-3D generation are often optimization-based, requiring many computationally expensive score-distillation steps. To overcome these challenges, we introduce an Amortized Generative 3D Gaussian framework (AGG) that instantly produces 3D Gaussians from a single image, eliminating the need for per-instance optimization.

Utilizing an intermediate hybrid representation, AGG decomposes the generation of 3D Gaussian locations and other appearance attributes for joint optimization. Moreover, we propose a cascaded pipeline that first generates a coarse representation of the 3D data and later upsamples it with a 3D Gaussian super-resolution module.

Our method is evaluated against existing optimization-based 3D Gaussian frameworks and sampling-based pipelines utilizing other 3D representations, where AGG showcases competitive generation abilities both qualitatively and quantitatively while being several orders of magnitude faster.
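
The amortized, cascaded idea can be caricatured in a few lines: a feed-forward network maps one image to a coarse set of per-Gaussian parameters, and a second head "super-resolves" each coarse Gaussian into several finer ones, all in a single forward pass. The layer choices and the 14-parameter Gaussian layout below are assumptions for illustration, not the AGG architecture.

```python
import torch
import torch.nn as nn

class AmortizedGaussianGenerator(nn.Module):
    """Toy sketch of an amortized, two-stage image-to-3D-Gaussians pipeline.

    Placeholder layers only; AGG itself uses a hybrid intermediate representation
    and a dedicated 3D Gaussian super-resolution module.
    """
    def __init__(self, n_coarse=1024, upsample=4, feat=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat, 4, 4), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Stage 1: a coarse set of Gaussians (xyz, scale, rotation, color, opacity = 14).
        self.coarse_head = nn.Linear(feat, n_coarse * 14)
        # Stage 2: "super-resolve" each coarse Gaussian into several finer ones.
        self.refine_head = nn.Linear(14, upsample * 14)
        self.n_coarse, self.upsample = n_coarse, upsample

    def forward(self, image):                      # image: (B, 3, H, W)
        z = self.encoder(image)
        coarse = self.coarse_head(z).view(-1, self.n_coarse, 14)
        fine = self.refine_head(coarse).view(-1, self.n_coarse * self.upsample, 14)
        return fine                                # per-Gaussian parameters, one pass

gaussians = AmortizedGaussianGenerator()(torch.randn(1, 3, 256, 256))
print(gaussians.shape)   # (1, 4096, 14)
```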

3.3. PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models

This technical report introduces PIXART-δ, a text-to-image synthesis framework that integrates the Latent Consistency Model (LCM) and ControlNet into the advanced PIXART-α model.

PIXART-α is recognized for its ability to generate high-quality images of 1024px resolution through a remarkably efficient training process. The integration of LCM in PIXART-δ significantly accelerates the inference speed, enabling the production of high-quality images in just 2–4 steps.

Notably, PIXART-δ achieves a breakthrough of 0.5 seconds for generating 1024x1024 pixel images, marking a 7x improvement over PIXART-α.

Additionally, PIXART-δ is designed to be efficiently trainable on 32GB V100 GPUs within a single day. With its 8-bit inference capability (von Platen et al., 2023), PIXART-δ can synthesize 1024px images within 8GB GPU memory constraints, greatly enhancing its usability and accessibility.

Furthermore, incorporating a ControlNet-like module enables fine-grained control over text-to-image diffusion models. We introduce a novel ControlNet-Transformer architecture, specifically tailored for Transformers, achieving explicit controllability alongside high-quality image generation.

As a state-of-the-art, open-source image generation model, PIXART-δ offers a promising alternative to the Stable Diffusion family of models, contributing significantly to text-to-image synthesis.
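
The few-step inference behind latent consistency models can be sketched generically: each step calls a consistency function once to predict a clean latent and then re-noises it toward the next timestep. The schedule and re-noising rule below are simplified assumptions, not PIXART-δ's implementation.

```python
import torch

@torch.no_grad()
def lcm_style_sample(consistency_fn, text_emb, shape, steps=(999, 759, 499, 259)):
    """Toy sketch of few-step latent-consistency sampling (2-4 network calls in total).

    consistency_fn: callable(x_t, t, text_emb) -> predicted clean latent x0.
    """
    x = torch.randn(shape)                                    # start from pure noise
    for i, t in enumerate(steps):
        x0 = consistency_fn(x, torch.tensor([t]), text_emb)   # one call per step
        if i + 1 < len(steps):                                # re-noise toward next step
            t_next = steps[i + 1] / 1000.0
            x = (1 - t_next) ** 0.5 * x0 + t_next ** 0.5 * torch.randn(shape)
        else:
            x = x0
    return x                                                  # decode with the VAE afterwards
```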

3.4. Let’s Go Shopping (LGS) — Web-Scale Image-Text Dataset for Visual Concept Understanding

Vision and vision-language applications of neural networks, such as image classification and captioning, rely on large-scale annotated datasets that require non-trivial data-collecting processes. This time-consuming endeavor hinders the emergence of large-scale datasets, limiting researchers and practitioners to a small number of choices.

Therefore, we seek more efficient ways to collect and annotate images. Previous initiatives have gathered captions from HTML alt-texts and crawled social media postings, but these data sources suffer from noise, sparsity, or subjectivity. For this reason, we turn to commercial shopping websites whose data meet three criteria: cleanliness, informativeness, and fluency.

We introduce the Let’s Go Shopping (LGS) dataset, a large-scale public dataset with 15 million image-caption pairs from publicly available e-commerce websites. When compared with existing general-domain datasets, the LGS images focus on the foreground object and have less complex backgrounds.

Our experiments on LGS show that the classifiers trained on existing benchmark datasets do not readily generalize to e-commerce data, while specific self-supervised visual feature extractors can better generalize.

Furthermore, LGS’s high-quality, e-commerce-focused images and bimodal nature make it advantageous for vision-language bimodal tasks: LGS enables image-captioning models to generate richer captions and helps text-to-image generation models achieve e-commerce style transfer.

4. Video Generation & Editing

4.1. ANIM-400K: A Large-Scale Dataset for Automated End-To-End Dubbing of Video

The Internet’s wealth of content, up to 60% of which is published in English, starkly contrasts with the global population, of which only 18.8% are English speakers and just 5.1% consider it their native language, leading to disparities in online information access.

Unfortunately, automated processes for dubbing video — replacing the audio track of a video with a translated alternative — remain complex and challenging, as existing pipelines necessitate precise timing, facial movement synchronization, and prosody matching.

While end-to-end dubbing offers a solution, data scarcity continues to impede the progress of both end-to-end and pipeline-based methods. In this work, we introduce Anim-400K, a comprehensive dataset of over 425K aligned animated video segments in Japanese and English supporting various video-related tasks, including automated dubbing, simultaneous translation, guided video summarization, and genre/theme/style classification.

4.2. MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation

The growing demand for high-fidelity video generation from textual descriptions has catalyzed significant research in this field. In this work, we introduce MagicVideo-V2, which integrates a text-to-image model, a video motion generator, a reference image embedding module, and a frame interpolation module into an end-to-end video generation pipeline.

Benefiting from these architectural designs, MagicVideo-V2 can generate aesthetically pleasing, high-resolution videos with remarkable fidelity and smoothness. It demonstrates superior performance over leading text-to-video systems such as Runway, Pika 1.0, Morph, Moon Valley, and the Stable Video Diffusion model, as judged by large-scale user evaluation.
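
A schematic of such a multi-stage pipeline, with each stage as a placeholder callable, looks roughly like this; the stage interfaces are assumptions for illustration, not MagicVideo-V2's actual modules.

```python
def multistage_video_pipeline(prompt, t2i, ref_embed, i2v_motion, interpolate):
    """Toy sketch of a multi-stage text-to-video pipeline (placeholder callables).

    t2i:         text -> reference image
    ref_embed:   reference image -> conditioning embedding
    i2v_motion:  (prompt, embedding) -> low-frame-rate video
    interpolate: video -> higher-frame-rate, smoother video
    """
    image = t2i(prompt)                  # stage 1: text-to-image
    cond = ref_embed(image)              # stage 2: reference image embedding
    video = i2v_motion(prompt, cond)     # stage 3: video motion generation
    return interpolate(video)            # stage 4: frame interpolation
```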

4.3. Jump Cut Smoothing for Talking Heads

A jump cut introduces an abrupt, sometimes unwanted change in the viewing experience. We present a novel framework for smoothing these jump cuts in the context of talking-head videos.

We leverage the appearance of the subject from other source frames in the video, fusing it with a mid-level representation driven by DensePose keypoints and face landmarks. To achieve motion, we interpolate the keypoints and landmarks between the end frames around the cut. We then use an image translation network to synthesize pixels from the keypoints and source frames.

Because keypoints can contain errors, we propose a cross-modal attention scheme to select the most appropriate source among multiple options for each keypoint. By leveraging this mid-level representation, our method can achieve stronger results than a strong video interpolation baseline.

We demonstrate our method on various jump cuts in talking-head videos, such as cutting filler words, pauses, and even random cuts. Our experiments show that we can achieve seamless transitions, even in challenging cases where the talking head rotates or moves drastically across the jump cut.
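
The motion step reduces to interpolating keypoints between the two frames that border the cut; a minimal NumPy sketch is shown below, with the image-translation network that synthesizes pixels left out.

```python
import numpy as np

def interpolate_keypoints(kps_before, kps_after, n_frames):
    """Toy sketch: linearly interpolate landmarks/keypoints across a jump cut.

    kps_before, kps_after: (K, 2) keypoints on the last frame before and the first frame
    after the cut. Returns (n_frames, K, 2) intermediate keypoints; each intermediate
    frame would then be synthesized from these keypoints plus source frames.
    """
    alphas = np.linspace(0.0, 1.0, n_frames + 2)[1:-1]   # exclude the end frames
    return np.stack([(1 - a) * kps_before + a * kps_after for a in alphas])

mid_kps = interpolate_keypoints(np.random.rand(68, 2), np.random.rand(68, 2), n_frames=5)
print(mid_kps.shape)   # (5, 68, 2)
```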

4.4. URHand: Universal Relightable Hands

Existing photorealistic relightable hand models require extensive identity-specific observations in different views, poses, and illuminations, and face challenges in generalizing to natural illuminations and novel identities.

To bridge this gap, we present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities. Our model allows few-shot personalization using images captured with a mobile phone and is ready to be photorealistically rendered under novel illuminations.

To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities.

The key challenge is scaling the cross-identity training while maintaining personalized fidelity and sharp details without compromising generalization under natural illuminations. To this end, we propose a spatially varying linear lighting model as the neural renderer that takes physics-inspired shading as an input feature.

By removing non-linear activations and bias, our specifically designed lighting model explicitly keeps the linearity of light transport. This enables single-stage training from light-stage data while generalizing to real-time rendering under arbitrary continuous illuminations across diverse identities.

In addition, we introduce the joint learning of a physically based model and our neural relighting model, which further improves fidelity and generalization. Extensive experiments show that our approach achieves superior performance over existing methods in terms of both quality and generalizability. We also demonstrate quick personalization of URHand from a short phone scan of an unseen identity.
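
The "linear lighting model" constraint can be illustrated with a small module that has no nonlinear activations or biases on the path from the shading feature to the output, so the predicted radiance scales linearly with the incident light. The layer sizes and feature names below are invented for illustration and are not the URHand architecture.

```python
import torch
import torch.nn as nn

class LinearLightingHead(nn.Module):
    """Toy sketch of a spatially varying *linear* lighting model.

    The mapping from the physics-inspired shading feature to RGB uses no nonlinear
    activations and no bias, so scaling the incoming light scales the output.
    """
    def __init__(self, shading_dim=16, spatial_dim=32, hidden=64):
        super().__init__()
        # Spatially varying weights are predicted from a per-point appearance feature.
        self.weight_net = nn.Linear(spatial_dim, hidden * shading_dim)
        self.out = nn.Linear(hidden, 3, bias=False)
        self.hidden, self.shading_dim = hidden, shading_dim

    def forward(self, shading_feat, spatial_feat):
        # shading_feat: (N, shading_dim) physics-inspired shading per surface point
        # spatial_feat: (N, spatial_dim) per-point feature that varies over the hand
        w = self.weight_net(spatial_feat).view(-1, self.hidden, self.shading_dim)
        h = torch.einsum('nhs,ns->nh', w, shading_feat)   # linear in the shading input
        return self.out(h)                                # (N, 3) RGB radiance

rgb = LinearLightingHead()(torch.randn(100, 16), torch.randn(100, 32))
```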

5. Image Recognition

5.1. Denoising Vision Transformers

We delve into a nuanced but significant challenge inherent to Vision Transformers (ViTs): the feature maps of these models exhibit grid-like artifacts, which hurt the performance of ViTs in downstream tasks. Our investigations trace this fundamental issue down to the positional embeddings at the input stage.

To address this, we propose a novel noise model, which is universally applicable to all ViTs. Specifically, the noise model dissects ViT outputs into three components: a semantics term free from noise artifacts and two artifact-related terms that are conditioned on pixel locations. Such a decomposition is achieved by enforcing cross-view feature consistency with neural fields on a per-image basis.

This per-image optimization process extracts artifact-free features from raw ViT outputs, providing clean features for offline applications. Expanding the scope of our solution to support online functionality, we introduce a learnable denoiser to predict artifact-free features directly from unprocessed ViT outputs, which shows remarkable generalization capabilities to novel data without the need for per-image optimization. Our two-stage approach, termed Denoising Vision Transformers (DVT), does not require re-training existing pre-trained ViTs and is immediately applicable to any Transformer-based architecture.

We evaluate our method on a variety of representative ViTs (DINO, MAE, DeiT-III, EVA02, CLIP, DINOv2, DINOv2-reg). Extensive evaluations demonstrate that our DVT consistently and significantly improves existing state-of-the-art general-purpose models in semantic and geometric tasks across multiple datasets (e.g., +3.84 mIoU). We hope our study will encourage a re-evaluation of ViT design, especially regarding the naive use of positional embeddings.
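
A toy version of a learnable, position-conditioned feature denoiser might look like the following; the architecture is a placeholder, and the paper's decomposition into a semantics term and location-dependent artifact terms is only hinted at through the coordinate conditioning.

```python
import torch
import torch.nn as nn

class FeatureDenoiser(nn.Module):
    """Toy sketch: map raw ViT patch features to artifact-free features, conditioned
    on patch positions (the artifact terms depend on pixel location)."""
    def __init__(self, dim=768):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(2, 128), nn.GELU(), nn.Linear(128, dim))
        self.denoise = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, vit_feats, grid_xy):
        # vit_feats: (N, dim) raw ViT patch features with grid-like artifacts
        # grid_xy:   (N, 2) normalized patch-center coordinates in [0, 1]
        pos = self.pos_mlp(grid_xy)
        return self.denoise(torch.cat([vit_feats, pos], dim=-1))   # cleaned features

clean = FeatureDenoiser()(torch.randn(196, 768), torch.rand(196, 2))
```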

5.2. Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively

The CLIP and Segment Anything Model (SAM) are remarkable vision foundation models (VFMs). SAM excels in segmentation tasks across diverse domains, while CLIP is renowned for its zero-shot recognition capabilities. This paper presents an in-depth exploration of integrating these two models into a unified framework.

Specifically, we introduce Open-Vocabulary SAM, a SAM-inspired model designed for simultaneous interactive segmentation and recognition, leveraging two unique knowledge transfer modules: SAM2CLIP and CLIP2SAM. The former adapts SAM’s knowledge into CLIP via distillation and learnable transformer adapters, while the latter transfers CLIP knowledge into SAM, enhancing its recognition capabilities.

Extensive experiments on various datasets and detectors show the effectiveness of Open-Vocabulary SAM in both segmentation and recognition tasks, significantly outperforming the naive baselines of simply combining SAM and CLIP. Furthermore, aided by training on image classification data, the method can segment and recognize approximately 22,000 classes.
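
The SAM2CLIP direction can be sketched as a simple feature-distillation step, where a learnable adapter maps CLIP visual features toward frozen SAM features; the adapter, feature shapes, and loss below are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sam2clip_distill_loss(sam_feats, clip_feats, adapter):
    """Toy sketch of a SAM2CLIP-style step: a learnable adapter pulls CLIP visual
    features toward frozen SAM features so segmentation knowledge is shared."""
    pred = adapter(clip_feats)                      # (N, D) adapted CLIP features
    return F.mse_loss(pred, sam_feats.detach())     # distill from the frozen SAM branch

adapter = nn.Sequential(nn.Linear(1024, 512), nn.GELU(), nn.Linear(512, 256))
loss = sam2clip_distill_loss(torch.randn(64, 256), torch.randn(64, 1024), adapter)
```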

If you like the article and would like to support me, make sure to:

Subscribe to my newsletter To Data & Beyond to get full and early access to my articles:

Are you looking to start a career in data science and AI and do not know how? I offer data science mentoring sessions and long-term career mentoring:
