
MPEG 150

MPEG 150 took place as an online conference from 2025-03-31 until 2025-04-04.

Press Release

At its 150th meeting, MPEG promoted three standards (among others) to Final Draft International Standard (FDIS), driving innovation in immersive audio and video coding:

  • MPEG-H 3D audio
  • MPEG-H 3D audio reference software
  • MPEG-I Immersive audio reference software

MPEG issues Call for Proposals for Audio Coding for Machines (ACoM)

At the 150th MPEG meeting, MPEG Technical Requirements (WG 2) issued a Call for Proposals (CfP) for technologies and solutions enabling efficient compression for audio coding for machines. Traditional coding methods aim for the best audio quality under a given bit-rate constraint for human consumption. MPEG-ACoM aims to define a bitstream and data format, efficient in terms of bitrate/size, for compressing audio, multi-dimensional streams, or features extracted from such signals, suitable for machine analysis tasks or hybrid machine and human consumption. In addition, such a data format can be used to transport recorded audio data from sensor networks to machine listening units. Machine listening has applications in industrial, surveillance, control, medical, and multimedia settings. This call focuses on lossless audio coding, enabling the use of the same compression scheme for all applications.

This CfP welcomes submissions of proposals from companies and other organizations. Registration is required by 1 August 2025; the submission of bitstream files, results, and decoder packages is required by 1 September 2025; and the submission of proponent documentation is due by 26 September 2025. Evaluation of the submissions in response to the CfP will be performed at the 152nd MPEG meeting in October 2025.

The CfP and related documents are available at https://www.mpeg.org/.
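The lossless focus of the call can be illustrated with a classic predict-and-entropy-code scheme. The sketch below is a hypothetical illustration, not the ACoM format (which the CfP is soliciting): it delta-predicts integer samples and Rice-codes the residuals, reconstructing the input bit-exactly.

```python
# Illustrative lossless audio coding sketch (NOT the ACoM standard):
# first-order prediction followed by Rice coding of the residuals.

def zigzag(v):
    """Map signed integers to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * v if v >= 0 else -2 * v - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def rice_encode(values, k=4):
    """Rice coding: quotient in unary, k-bit remainder, as a bit string."""
    out = []
    for v in values:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(out)

def rice_decode(bits, n, k=4):
    vals, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == "1":
            q += 1
            i += 1
        i += 1  # skip the terminating "0"
        r = int(bits[i:i + k], 2)
        i += k
        vals.append(unzigzag((q << k) | r))
    return vals

def encode(samples, k=4):
    # First-order predictor: transmit sample[n] - sample[n-1]; residuals of
    # correlated audio are small, so their Rice codes are short.
    residuals = [samples[0]] + [samples[i] - samples[i - 1]
                                for i in range(1, len(samples))]
    return rice_encode(residuals, k)

def decode(bits, n, k=4):
    residuals = rice_decode(bits, n, k)
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples
```

A real codec would add adaptive parameter selection and block framing, but the roundtrip property (decoded output identical to the input) is exactly what "lossless" requires.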

MPEG ratifies the 4th edition of the MPEG-H 3D audio standard and reference software

At the 150th MPEG meeting in April 2025, the fourth edition of ISO/IEC 23008-3, MPEG-H 3D audio, and the fourth edition of ISO/IEC 23008-6, MPEG-H 3D audio reference software, progressed to Final Draft International Standard (FDIS).

The changes in the new edition of ISO/IEC 23008-3 are solely editorial – the previously standardized functionality remains unchanged. Normative references were corrected and updated, and typographical and formatting errors were fixed. Textual changes regarding random access points, configuration changes, bitrate adaptation, hybrid delivery, IGF, DRC, and loudness handling aim to increase comprehensibility.

Likewise, the technical scope of the fourth edition of ISO/IEC 23008-6 was not changed. Software fixes were introduced, and the latest version of the MPEG-D DRC reference software was integrated. Furthermore, an improved checkout mechanism facilitates setting up the reference software workspace.

The fourth editions of ISO/IEC 23008-3 and ISO/IEC 23008-6 enhance the overall usability of MPEG-H 3D audio.

MPEG ratifies MPEG-I Immersive Audio Reference Software

At the 150th MPEG meeting, MPEG Audio Coding (WG 6) promoted ISO/IEC 23090-34 Immersive audio reference software to Final Draft International Standard (FDIS), marking a major milestone in the development of next-generation audio technology.

ISO/IEC 23090-34 Immersive audio reference software is a complete implementation of MPEG-I immersive audio (ISO/IEC 23090-4) in a real-time framework. The technology is a groundbreaking standard for compact and highly realistic representation and auralization of spatial sound, enabling interoperability across different platforms and ecosystems. Tailored for metaverse applications, including Virtual, Augmented, and Mixed Reality (VR/AR/MR), it enables real-time rendering and streaming of interactive 3D audio with six degrees of freedom (6DoF). Users can not only turn their heads in any direction (pitch/yaw/roll) but also move freely through virtual acoustic environments (x/y/z), creating an unparalleled sense of immersion.

The software package contains an implementation of an encoder and a renderer delivering either binaural headphone or multi-channel loudspeaker output, which can serve as the basis for building interactive, fully immersive experiences.
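To make the 6DoF idea above concrete, here is a deliberately simplified sketch, not the MPEG-I renderer: the coordinate convention (x forward, y to the listener's right, z up, height ignored) and the panning law are assumptions chosen for illustration. It turns a listener pose and a point source position into per-ear gains via inverse-distance attenuation and constant-power panning.

```python
import math

# Toy 6DoF auralization sketch (illustrative assumptions, not ISO/IEC 23090-4):
# convention: x forward, y to the listener's right, z up; only yaw is used and
# height is ignored in this horizontal-plane example.

def relative_azimuth_and_distance(listener_pos, yaw_deg, source_pos):
    """Return (azimuth in degrees, horizontal distance) of the source in
    listener-relative coordinates; positive azimuth means to the right."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dy, dx)) - yaw_deg
    return (azimuth + 180.0) % 360.0 - 180.0, dist  # wrap to [-180, 180)

def stereo_gains(azimuth_deg, dist, ref_dist=1.0):
    """Simple rendering: inverse-distance attenuation and constant-power pan."""
    g = ref_dist / max(dist, ref_dist)            # 1/r attenuation, clamped
    az = max(-90.0, min(90.0, azimuth_deg))       # clamp rear sources aside
    pan = math.radians((az + 90.0) / 2.0)         # map [-90, 90] to [0, 90] deg
    return g * math.cos(pan), g * math.sin(pan)   # (left, right) gains
```

Because the pose feeds directly into the gain computation, updating `listener_pos` and `yaw_deg` every frame is what lets the user walk and turn through the virtual scene; a real renderer replaces the panner with HRTF filtering, room acoustics, and occlusion modelling.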

With MPEG-I immersive audio, MPEG continues to set the standard for the future of interactive spatial audio, paving the way for novel immersive digital experiences.

MPEG promotes Video Coding for Machines to Committee Draft

At the 150th MPEG meeting, MPEG Video Coding (WG 4) promoted ISO/IEC 23888-2 Video coding for machines to Committee Draft (CD), the initial stage of standard development.

This emerging standard defines a video compression scheme optimized for consumption of video by a machine task network, specifying a pipeline of processing stages that transform the video before it is passed to the machine task network. It supports efficient compression by transforming the video in ways that, although impacting subjective quality, do not degrade machine task performance. The emerging standard reuses existing core coding technology to decode the coded video prior to the pipeline of processing stages, which apply transformations spatially, temporally, and in amplitude before the machine task network.

The standard is planned to be completed, i.e., to reach the status of Final Draft International Standard (FDIS), by April 2026.

Support for new SEI messages in AVC

At its 150th meeting, JVET (SC 29 WG 5) issued a Committee Draft Amendment (CDAM) for the Advanced Video Coding standard (AVC, ISO/IEC 14496-10 | ITU-T Rec. H.264), which will enable using new SEI messages currently under development for the next version of the Versatile Supplemental Enhancement Information standard (VSEI, ISO/IEC 23002-7 | ITU-T Rec. H.274). These will include extensions of the neural-network-based post-filtering SEI message, as well as new messages for source picture timing signalling, signalling of modalities such as infrared or X-ray video, a set of messages supporting the addition of digital signatures to video streams, signalling of AI usage restrictions, encoder optimization information, packed regions information, processing order and nesting in the case of concatenated processing of several SEI messages, and a message enabling generative face synthesis from a video stream. Finalization of this amendment is planned for early 2026.

Work towards next edition of VVC conformance

Availability of conformance bitstreams helps implementers verify the compliance of decoding devices with the specification of a video compression standard. At its 150th meeting, JVET (SC 29 WG 5) issued a Committee Draft (CD) for the third edition of Conformance Testing for Versatile Video Coding (ISO/IEC 23090-15 | ITU-T Rec. H.266.2), which adds new bitstreams for testing multi-layer functionality and corrects some bitstreams in the existing set. Finalization of this standard is planned for early 2026.

Draft of Joint Call for Evidence on video compression with capability beyond VVC

At its 150th meeting, JVET (SC 29 WG 5) issued a draft text of a Joint Call for Evidence (CfE, on behalf of its SC 29 and ITU-T SG21 parent bodies) as a public document. The CfE requests information regarding the existence of video compression technology with compression performance or additional functionality beyond that of VVC, where the tradeoff in terms of encoder and decoder implementation cost is also an important criterion. Further, a first draft of use cases and requirements for a potential next-generation video coding standard is being issued by the parent bodies. The CfE is planned to be finalized during the 151st JVET meeting in July, with submissions expected by the time of the 152nd meeting in October 2025. Numerous subjective-quality testing activities are also planned for that meeting, using a set of video test materials that includes novel categories such as gaming content and user-generated content.

MPEG advances standards in 3D Graphics, Haptics, and Internet of Media Things

At its 150th meeting, MPEG Working Group 7 (WG 7) marked significant progress by promoting a series of new and revised standards to Committee Draft (CD) and Final Draft International Standard (FDIS) status. These developments contribute to MPEG’s mission to provide interoperable, efficient, and scalable solutions across immersive media domains.

Point Cloud Compression

Point cloud compression is a foundational technology for applications such as immersive communication, autonomous driving, cultural heritage digitization, and gaming. MPEG WG 7 continues to refine and diversify its point cloud compression standards to meet industry-specific demands.

  • ISO/IEC FDIS 23090-19 – Video-based Point Cloud Compression (V-PCC) Reference Software, 2nd Edition

This edition updates the reference software for the widely adopted V-PCC standard, which is optimized for dense point clouds and leverages mature video compression techniques. The V-PCC approach enables efficient streaming and rendering of dynamic 3D content over existing video pipelines, offering broad compatibility and performance across platforms.

  • ISO/IEC CD 23090-38 – Enhanced Geometry-based Point Cloud Compression (E-G-PCC)

E-G-PCC introduces major enhancements over the original Geometry-based Point Cloud Compression (G-PCC), including temporal prediction and optimized static point cloud coding, addressing both performance and flexibility. This new standard enables improved compression for dynamic content while remaining versatile across dense, sparse, or LiDAR-derived point clouds. Although not backward compatible with G-PCC, it offers a self-contained and powerful alternative for advanced use cases.

  • ISO/IEC CD 23090-35 – Low Complexity, Low Latency LiDAR Coding (L3C2) Reference Software and Conformance

L3C2 targets real-time compression needs in systems using spinning LiDAR, such as robotics and autonomous navigation. It exploits the acquisition order of LiDAR scans to reduce encoding complexity and latency, making it ideal for low-power or embedded devices requiring efficient 3D data exchange.
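The benefit of exploiting acquisition order can be shown with a toy example; this illustrates the general idea only, not the L3C2 syntax. Ranges from a spinning LiDAR arrive in azimuth order within each laser ring, so neighbouring values are strongly correlated and their differences are small and cheap to entropy-code.

```python
# Toy illustration of acquisition-order coding for spinning LiDAR
# (general principle only, not the L3C2 bitstream): within one laser ring,
# consecutive range measurements are similar, so delta coding shrinks them.

def delta_encode_ring(ranges):
    """ranges: integer range measurements of one laser ring, in acquisition
    (azimuth) order. Returns the first value plus successive differences."""
    return [ranges[0]] + [ranges[i] - ranges[i - 1]
                          for i in range(1, len(ranges))]

def delta_decode_ring(deltas):
    """Inverse of delta_encode_ring: cumulative sum restores the ranges."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

Because the residuals cluster near zero for smooth surfaces, a subsequent entropy coder spends few bits per point, and no reordering or global octree construction is needed, which is where the low complexity and low latency come from.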

Together, these standards form a cohesive suite addressing dense, sparse, and application-specific point cloud scenarios, ensuring optimized performance for industries spanning automotive, VR/AR, smart cities, and more.

Mesh Compression

  • ISO/IEC CD 23090-36 – Video-based Dynamic Mesh Compression (V-DMC) Conformance and Reference Software

V-DMC addresses the growing demand for efficient representation of animated 3D meshes in immersive media and volumetric video. It leverages video codecs for temporal compression of dynamic mesh sequences, enabling bandwidth-efficient transmission of complex 3D content. The availability of conformance and reference software facilitates adoption by ensuring interoperability and performance consistency across implementations.

Haptics Coding

  • ISO/IEC FDIS 23090-33 – Conformance and Reference Software for Haptics Coding

Following the publication of ISO/IEC 23090-31:2025, the core Haptics Coding standard, the new FDIS 23090-33 provides reference software and conformance testing, enabling industry stakeholders to develop interoperable, high-performance solutions. The standard supports both parametric and signal-based representations and covers a broad range of haptic modalities—including vibrotactile, kinesthetic, tactile, and electrotactile feedback.

Designed to be device-independent, the MPEG haptics framework allows content creators to deliver touch-based experiences that adapt to the capabilities of end-user hardware. This is essential for scalable deployment across domains such as virtual/augmented reality, gaming, remote collaboration, and communication. Reference software is publicly available on GitHub – MPEGGroup/HapticReferenceSoftware, ensuring transparency and ease of integration.

Scene Media Representations

  • ISO/IEC FDIS 23090-28 – Interchangeable Scene-based Media Representations

This standard enables the interchange of complex 3D scenes, supporting hybrid representations that combine media elements such as meshes, point clouds, videos, and haptics. ISO/IEC 23090-28 ensures that immersive environments—ranging from digital twins to interactive entertainment—can be composed, transmitted, and rendered with consistency across platforms. The standard plays a critical role in enabling open ecosystems for immersive applications.

Internet of Media Things (IoMT)

At the 150th meeting, WG 7 achieved a major milestone by advancing five specifications in the Internet of Media Things (IoMT) family, which provides a foundational architecture for scalable, distributed processing among smart media devices (“media things”).

  • ISO/IEC 23093-1 (3rd Edition) – Architecture
    This edition extends the core architecture with new use cases and detailed sequence diagrams, improving readability and guiding implementation in diverse scenarios such as smart environments and collaborative media workflows.
  • ISO/IEC 23093-2 (3rd Edition) – Discovery and Communication API
    This revision enhances service discovery and device communication mechanisms, further simplifying integration in heterogeneous networks.
  • ISO/IEC 23093-3 (3rd Edition) – Media Data Formats and APIs
    This edition broadens support for various media types and introduces improved data exchange methods tailored for constrained devices.
  • ISO/IEC 23093-5 – IoMT Autonomous Collaboration (New Standard)
    Specifies APIs and data formats for orchestrating autonomous missions among interconnected media things and between devices and system managers. This is pivotal for use cases like coordinated drone swarms, industrial robots, and surveillance systems.
  • ISO/IEC FDIS 23093-6 – Media Data Formats and APIs for Distributed AI Processing
    This new standard provides semantic definitions and APIs for data exchanged among media analyzers, supporting distributed artificial intelligence in networked media applications.

These advancements cement IoMT as a forward-looking framework for intelligent, interoperable, and media-centric device ecosystems, enabling real-time collaboration, adaptive content management, and distributed AI.

MPEG extends the capability of the MPEG-G standard series to represent and use graph reference genomes

At the 150th MPEG meeting, MPEG Genomic Coding (WG 8) started two amendments of ISO/IEC 23092 Part 1 “Transport and storage of genomic information” and Part 2 “Coding of genomic annotations” with the objective of extending the capabilities of the MPEG-G series to handle reference genomes in the form of graphs. It is well recognized that graph genomes capture the commonalities and differences among genomes from organisms of the same species better than somewhat arbitrary “average” linear genomes. However, the usage of graph genomes has so far been limited by the potentially exponential complexity of aligning and indexing DNA-labeled graphs. After about two years of development of the technologies received in response to the public call for technology issued by MPEG, WG 8 has shown that both alignment and indexing of DNA-labeled graphs of arbitrary topology can be reduced to linear complexity, making graph genome compression, and the compression of sequence reads against a graph reference, feasible and efficient. Core experiments have shown that it is possible to handle graphs orders of magnitude larger than those currently built in the state of the art. The roadmap of the MPEG-G series is to publish the new extended standards providing full graph support by the beginning of 2027.
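A toy example shows why graph references are attractive; the structure below is hypothetical and not the MPEG-G graph format under amendment. A single-nucleotide variant becomes two alternative branches in a labelled graph, and a read is consistent with the reference if it spells out some path.

```python
# Toy DNA-labeled graph reference (hypothetical, not the MPEG-G format):
# nodes carry DNA strings, edges give allowed successions, and a variant
# appears as two branches. A read matches if it spells a path. The naive
# recursive check below assumes a small acyclic graph; the point of the
# WG 8 work is that real alignment/indexing can be made linear-complexity.

def spells_path(graph, labels, node, read):
    """graph: dict node -> list of successor nodes;
    labels: dict node -> DNA string.
    Returns True if `read` can be spelled along some path starting at node."""
    lab = labels[node]
    if read.startswith(lab):
        rest = read[len(lab):]
        if not rest:
            return True
        return any(spells_path(graph, labels, nxt, rest)
                   for nxt in graph[node])
    # The read may end in the middle of this node's label.
    return lab.startswith(read)
```

With nodes `A:"ACG"`, branches `B:"T"` and `C:"G"`, and `D:"AA"`, both alleles `ACGTAA` and `ACGGAA` match the same graph, whereas a linear reference could represent only one of them without mismatches.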

Output documents published in MPEG 150

MPEG-I

# | Part | Title
10 | Carriage of Visual Volumetric Video-based Coding Data | WD of ISO/IEC 23090-10 2nd edition Carriage of visual volumetric video-based coding data
10 | Carriage of Visual Volumetric Video-based Coding Data | Technologies under consideration for ISO/IEC 23090-10
12 | Immersive Video | Common test conditions for MPEG immersive video
12 | Immersive Video | Encoder guidelines for MPEG immersive video
13 | Video Decoding Interface for Immersive Media | WD of ISO/IEC 23090-13 2nd edition Video Decoding Interface for Immersive Media
13 | Video Decoding Interface for Immersive Media | Technologies under Consideration on Video Decoding Interface for Immersive Media
14 | Scene Description for MPEG Media | Technologies under Considerations on Scene Descriptions
14 | Scene Description for MPEG Media | Procedures for standard development for ISO/IEC 23090-14 (MPEG-I Scene Description)
17 | Reference Software and Conformance for OMAF | WD of Reference software and conformance for omnidirectional media format (OMAF) 2nd edition
24 | Conformance and Reference Software for Scene Description for MPEG Media | Potential improvement of ISO/IEC 23090-24 AMD 1 Conformance and reference software for scene description on haptics, augmented reality, avatar integration, interactivity and lighting
24 | Conformance and Reference Software for Scene Description for MPEG Media | Procedures for test scenarios and reference software development for MPEG-I Scene Description
29 | Video-based dynamic mesh coding | White paper on Video-based Dynamic Mesh Coding
33 | Conformance and reference software for haptics coding | CRM5.1 Reference software
37 | Conformance and reference software for carriage of haptics data | Text of ISO/IEC CD 23090-37 Conformance and reference software for carriage of haptics data
39 | Avatar representation formats | Procedures and Test Formats for Avatar Representation Formats
39 | Avatar representation formats | Potential improvement of ISO/IEC 23090-39 Avatar representation formats
39 | Avatar representation formats | EE Description for Avatar representation formats

MPEG-DASH

# | Part | Title
1 | Media Presentation Description and Segment Formats | Technologies under Consideration for DASH
7 | Delivery of CMAF content with DASH | WD of ISO/IEC TR 23009-7 Delivery of CMAF content with DASH

MPEG-H

# | Part | Title
12 | Image File Format | Technology under Consideration on ISO/IEC 23008-12
12 | Image File Format | Potential improvement of ISO/IEC 23008-12 3rd edition AMD 2 Low-overhead image file format

MPEG-4

# | Part | Title
12 | ISO Base Media File Format | Technologies under Consideration for ISO/IEC 14496-12 (ISOBMFF)
12 | ISO Base Media File Format | WD of ISO/IEC 14496-12 8th edition AMD 3 Carriage of depth and alpha
12 | ISO Base Media File Format | Revised text of ISO/IEC CD 14496-12 8th edition AMD 2 Tools for enhanced CMAF and DASH integration
14 | MP4 File Format | Technologies under Consideration for ISO/IEC 14496-14
15 | Carriage of Network Abstraction Layer (NAL) Unit Structured Video in the ISO Base Media File Format | WD of ISO/IEC 14496-15 7th edition AMD 2 Improvement of carriage of L-HEVC
15 | Carriage of Network Abstraction Layer (NAL) Unit Structured Video in the ISO Base Media File Format | Technologies under Consideration for ISO/IEC 14496-15
32 | File Format Reference | WD of ISO/IEC 14496-32 3rd edition File format reference software and conformance
34 | Syntactic description language | Workplan for the reference software and conformance for ISO/IEC 14496-34
34 | Syntactic description language | Technology under Consideration on ISO/IEC 14496-34 Syntactic Description Language

MPEG-2

# | Part | Title
1 | Systems | WD of ISO/IEC 13818-1 10th edition AMD 1 Improvement for the transport of MPEG-Green data

MPEG-B

# | Part | Title
7 | Common Encryption in ISO Base Media File Format Files | Text of ISO/IEC 23001-7:2023 CDAM 1 AES-256 Support
7 | Common Encryption in ISO Base Media File Format Files | Technologies under Consideration for ISO/IEC 23001-7
7 | Common Encryption in ISO Base Media File Format Files | Exploration on De-CENC related issue
10 | Carriage of Timed Metadata Metrics of Media in ISO Base Media File Format | Technologies under Consideration for ISO/IEC 23001-10 Carriage of timed metadata metrics of media in ISOBMFF
10 | Carriage of Timed Metadata Metrics of Media in ISO Base Media File Format | WD of ISO/IEC 23001-10 AMD 3 Additional quality metrics
19 | Carriage of green metadata | Potential improvement of ISO/IEC 23001-19 Carriage of green metadata

MPEG-A

# | Part | Title
19 | Common Media Application Format (CMAF) for Segmented Media | Technology under Consideration on CMAF
23 | Decentralized media rights application format | Working Draft of ISO/IEC 23000-23 Decentralized media rights application format
24 | Messaging media application format | Working Draft of ISO/IEC 23000-24 Messaging media application format

Explorations

# | Part | Title
41 | Enhanced compression beyond VVC capability | Draft Joint Call for Evidence on video compression with capability beyond VVC
45 | Gaussian splat coding | Draft Gaussian Splat Coding use cases
45 | Gaussian splat coding | Draft Gaussian Splat Coding requirements
46 | Audio Coding for Machines | Call for proposals on audio coding for machines
46 | Audio Coding for Machines | List of content available for CfP of audio coding for machines
46 | Audio Coding for Machines | Metadata format for audio coding for machines
46 | Audio Coding for Machines | Use cases and requirements on audio coding for machines
47 | Metadata Definition and Carriage for Split Rendering | Exploration on application specific metadata definition and carriage in media AUs
48 | Indicating AI generated/altered content using the MPEG Systems technologies | Exploration on media authenticity and provenance indication with the MPEG Systems technologies

All

# | Part | Title
– | – | Draft of use cases and requirements for potential next-generation video coding standard beyond VVC capability

MPEG-AI

# | Part | Title
– | – | Strategy on MPEG-AI
2 | Video coding for machines | Common test conditions for video coding for machines
4 | Feature coding for machines | Common test and training conditions for FCM

Other documents published in MPEG 150

Type | Title
Press Release | Press Release of 19th Meeting
AhG | List of the AHGs established at the 19th SC 29/WG 03 Meeting (MPEG 150)
AhG | WG 2 AHGs established at the 19th WG 2 meeting (MPEG 150)
Output | White paper on MPEG Technologies for Metaverse
Output | Assets of communication
Timeline | WG 6 Standards, Timeline and Workplan
Administrative Matters | Meeting Notice of the 151st MPEG meeting including the 20th meeting of SC 29/AG 2, 3, 5, WG 2, 3, 4, 5, 6, 7, 8
Administrative Matters | Request for offers to host an MPEG meeting (MPEG 157 - MPEG 160)
Management | MPEG Roadmap after the MPEG 150 meeting
Management | MPEG Roadmap after the MPEG 150 meeting (extended PPT)