AVC (Advanced Video Coding), also known as H.264 or MPEG-4 Part 10 (MPEG-4 AVC), is a video coding standard designed to achieve high compression of a video stream while maintaining high quality. It is by far the most commonly used format for recording, compressing, and distributing video content.
The H.264 (AVC) video format has a very wide range of applications, covering all forms of compressed digital video: from low-bit-rate Internet streaming to HDTV broadcast and virtually lossless digital cinema. Compared with MPEG-2 Part 2, H.264 is reported to reduce the bit rate by 50% or more. For example, H.264 can provide the same digital satellite TV quality as current MPEG-2 implementations at less than half the bitrate: current MPEG-2 implementations run at around 3.5 Mbps, while H.264 needs only about 1.5 Mbps. Sony claims that its 9 Mbps AVC recording mode is equivalent in image quality to HDV, which uses approximately 18-25 Mbps.
AVC-derived formats
AVCHD is a high-definition recording format developed by Sony and Panasonic that uses H.264 (it conforms to H.264 while adding application-specific features and constraints).
AVC-Intra is an intra-frame-only compression format developed by Panasonic.
XAVC is a recording format developed by Sony that uses H.264/MPEG-4 AVC Level 5.2, the highest level supported by the standard.
XAVC supports 4K resolution (4096×2160 and 3840×2160) at up to 60 frames per second (fps). Sony announced that the cameras supporting XAVC include two CineAlta cameras, the Sony PMW-F55 and the Sony PMW-F5. The PMW-F55 can record XAVC at 4K 30 fps at 300 Mbps and at 2K 30 fps at 100 Mbps. XAVC can record 4K at 60 fps with 4:2:2 chroma sampling at 600 Mbps.
AVC (Advanced Video Coding) features
H.264/AVC/MPEG-4 Part 10 contains several new features that allow video to be compressed much more efficiently than with older standards and provide greater flexibility for use in a wide variety of network environments. Some of these key features include:
The use of previously encoded pictures as references is much more flexible than in past standards, allowing up to 16 reference frames in some cases.
This is in contrast to previous standards, where the limit was typically one or, in the case of conventional "B-pictures" (B-frames), two. Profiles that support non-IDR frames specify, at most levels, that sufficient buffering must be available to allow at least 4 or 5 reference frames at maximum resolution.
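The multi-reference idea above can be sketched as a bounded buffer of previously decoded frames. This is a minimal illustrative model, not the standard's actual decoded-picture-buffer management (which involves reference list reordering and memory-management commands); the capacity of 16 matches the maximum reference frame count mentioned above.

```python
# Minimal sketch (assumption): model the pool of previously decoded frames
# that later frames may reference, capped at 16 entries as in H.264.
from collections import deque

MAX_REF_FRAMES = 16

dpb = deque(maxlen=MAX_REF_FRAMES)  # oldest reference is evicted automatically

for frame_num in range(20):
    dpb.append(frame_num)           # decode a frame, keep it available as a reference

print(list(dpb))  # the 16 most recently decoded frames: 4 .. 19
```

In the real codec the encoder chooses, per partition, which of these buffered pictures to predict from; older standards effectively had a buffer of size one (or two for B-frames).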
Variable block-size motion compensation (VBSMC) with block sizes ranging from 16×16 down to 4×4, enabling precise segmentation of moving regions. Supported luma prediction block sizes include 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, many of which can be used together within a single macroblock. The corresponding chroma prediction block sizes are smaller when chroma subsampling is used.
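The relationship between luma partition sizes and their chroma counterparts can be made concrete. The sketch below assumes 4:2:0 subsampling, where chroma is halved in both dimensions; the function name and sample layout are illustrative, not from the standard.

```python
# Sketch (assumption: 4:2:0 subsampling, chroma halved horizontally and vertically).
# Luma prediction partition sizes supported by H.264 VBSMC:
LUMA_PARTITIONS = [(16, 16), (16, 8), (8, 16), (8, 8), (8, 4), (4, 8), (4, 4)]

def chroma_partition(luma_w, luma_h, sub_w=2, sub_h=2):
    """Chroma block covering the same picture area as a given luma partition."""
    return (luma_w // sub_w, luma_h // sub_h)

for w, h in LUMA_PARTITIONS:
    cw, ch = chroma_partition(w, h)
    print(f"luma {w}x{h} -> chroma {cw}x{ch}")
```

So a 4×4 luma partition corresponds to a 2×2 chroma block under 4:2:0, which is why the chroma prediction blocks are described as "smaller".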
The ability to use multiple motion vectors per macroblock (one or two per partition), up to 32 in the case of a B macroblock consisting of 16 4×4 partitions. The motion vectors for each 8×8 or larger partition region may point to different reference pictures.
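The figure of 32 motion vectors follows directly from the partition counts given above, as this short arithmetic check shows:

```python
# Worked arithmetic: maximum motion vectors in one macroblock.
# A 16x16 macroblock split entirely into 4x4 partitions yields 16 partitions;
# in a bi-predicted (B) macroblock each partition may carry two motion vectors
# (one per reference list).
partitions = (16 // 4) * (16 // 4)   # 16 partitions of 4x4
mvs_per_partition = 2                # bi-prediction: two vectors per partition
max_mvs = partitions * mvs_per_partition
print(max_mvs)  # 32
```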
The ability to use any macroblock type in B-frames, including I-macroblocks, resulting in much more efficient encoding when B-frames are used. This feature was notably left out of MPEG-4 ASP.
Six-tap filtering for deriving half-pel luma sample predictions, giving sharper sub-pixel motion compensation. Quarter-pixel positions are then obtained by linear interpolation of the half-pel values, to save processing power.
Quarter-pixel precision motion compensation, enabling accurate description of the displacement of moving areas. For chroma, the resolution is typically halved both vertically and horizontally, so correspondingly finer precision is used for the chroma motion vectors.
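The two interpolation steps above can be sketched in one dimension. The 6-tap coefficients (1, -5, 20, 20, -5, 1) with a divide-by-32 rounding are the ones H.264 uses for half-pel luma samples; the sample row, function names, and the specific positions chosen are illustrative assumptions, and a real decoder applies this separably in two dimensions with border handling.

```python
# Sketch of H.264-style luma sub-pixel interpolation on a 1-D row of samples.
# Half-pel: 6-tap filter (1, -5, 20, 20, -5, 1) / 32 with rounding.
# Quarter-pel: rounded average of two neighbouring samples.

def clip255(x):
    """Clamp to the 8-bit sample range."""
    return max(0, min(255, x))

def half_pel(samples, i):
    """Half-pel sample between samples[i] and samples[i+1] (needs 3 taps each side)."""
    s = samples
    acc = s[i-2] - 5*s[i-1] + 20*s[i] + 20*s[i+1] - 5*s[i+2] + s[i+3]
    return clip255((acc + 16) >> 5)   # divide by 32 with rounding

def quarter_pel(a, b):
    """Quarter-pel sample: rounded average of two neighbouring sample values."""
    return (a + b + 1) >> 1

row = [10, 12, 40, 200, 210, 205, 60, 30]   # hypothetical integer-position luma row
h = half_pel(row, 3)          # half-pel position between 200 and 210
q = quarter_pel(row[3], h)    # quarter-pel between 200 and the half-pel sample
print(h, q)
```

Note that the half-pel value can overshoot its two neighbours (the negative outer taps sharpen edges), which is the point of using a 6-tap filter rather than simple averaging; only the cheaper quarter-pel step uses plain averaging.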