
Video Coding Standards: AVS China, H.264/MPEG-4...

With block diagrams, figures, tables, and graphs, this book helps the reader readily and intuitively understand the concepts, key techniques, and performance of AVS China and other video coding standards.


The requirements for multimedia (especially video and audio) communications have increased rapidly over the last two decades in broad areas such as television, entertainment, interactive services, telecommunications, conferencing, medicine, security, business, traffic, defense and banking. Video and audio coding standards play the most important roles in multimedia communications. To meet these requirements, a series of video and audio coding standards has been developed: MPEG-2, MPEG-4 and MPEG-21 for audio and video by ISO/IEC, H.26x for video and G.72x for audio by ITU-T, Video Codec 1 (VC-1) for video by the Society of Motion Picture and Television Engineers (SMPTE), and RealVideo (RV) 9 for video by RealNetworks.

AVS China is the abbreviation for the Audio Video Coding Standard of China. This new standard covers four main technical areas, namely systems, video, audio and digital rights management (DRM), along with supporting documents such as conformance verification. The second part of the standard, known as AVS1-P2 (Video - Jizhun), was approved as a national standard of China in 2006, and final drafts of several other parts have been completed, including AVS1-P1 (System - Broadcast), AVS1-P2 (Video - Zengqiang), AVS1-P3 (Audio - Double track), AVS1-P3 (Audio - 5.1), AVS1-P7 (Mobile Video), AVS-S-P2 (Video) and AVS-S-P3 (Audio). AVS China provides a technical solution for many applications such as digital broadcasting (SDTV and HDTV), high-density storage media and Internet streaming media, and will be used in the domestic IPTV, satellite and possibly cable TV markets. Compared with other coding standards such as H.264/AVC, the advantages of the AVS video standard include similar performance with lower complexity, lower implementation cost and lower licensing fees. The standard has attracted a great deal of attention from industries related to television, multimedia communications and even chip manufacturing around the world, and many well-known companies have joined the AVS Group as Full Members or Observing Members. The 163 members of the AVS Group include Texas Instruments (TI) Co., Agilent Technologies Co. Ltd., Envivio Inc., NDS, Philips Research East Asia, Aisino Corporation, LG, Alcatel Shanghai Bell Co. Ltd., Nokia (China) Investment (NCIC) Co. Ltd., Sony (China) Ltd. and Toshiba (China) Co. Ltd., as well as leading universities in China. There is thus a pressing need among instructors, students and engineers for a book dealing with AVS China and its performance compared with similar standards such as H.264, VC-1 and RV-9.

Prof. K. R. Rao received the Ph.D. degree in electrical engineering from the University of New Mexico, Albuquerque, in 1966. Since 1966 he has been with the University of Texas at Arlington, where he is currently a professor of electrical engineering. He, along with two other researchers, introduced the Discrete Cosine Transform in 1975, which has since become very popular in digital signal processing. He is the co-author of the books "Orthogonal Transforms for Digital Signal Processing" (Springer-Verlag, 1975), "Fast Transforms: Analyses and Applications" (Academic Press, 1982) and "Discrete Cosine Transform: Algorithms, Advantages, Applications" (Academic Press, 1990). He has edited a benchmark volume, "Discrete Transforms and Their Applications" (Van Nostrand Reinhold, 1985), and co-edited another, "Teleconferencing" (Van Nostrand Reinhold, 1985). He is co-author of the books "Techniques and Standards for Image/Video/Audio Coding" (Prentice Hall, 1996), "Packet Video Communications over ATM Networks" (Prentice Hall, 2000) and "Multimedia Communication Systems" (Prentice Hall, 2002), co-editor of the handbooks "The Transform and Data Compression Handbook" (CRC Press, 2001) and "Digital Video Image Quality and Perceptual Coding" (Marcel Dekker, 2004), and co-author of the book "Fast Fourier Transform and Its Applications" (Springer, 2009). Some of his books have been translated into Japanese, Chinese, Korean and Russian. He has conducted workshops and tutorials on video/audio coding and standards worldwide, has published extensively in refereed journals, and has been a consultant to industry, research institutes and academia. He is a Fellow of the IEEE.

Yan Lu is a Partner Research Manager at Microsoft Research Asia (MSRA), where he manages research on media computing and communication. He leads his team to innovate in the fields of real-time communication, computer vision, video analytics, audio enhancement, virtualization and mobile-cloud computing. He and his team have transferred many key technologies and research prototypes to Microsoft products such as Windows, Office, Xbox XDK, Kinect Studio, Teams, Skype and Azure Media Services. Prior to joining MSRA in 2004, he was the lead of the video coding group in the JDL Lab, Institute of Computing Technology, China. From 1999 to 2000 he was with the City University of Hong Kong as a research assistant. Yan Lu has contributed a number of key technologies to international standards such as MPEG-4, H.264/AVC, H.265/HEVC and AOM AV1, and was a key technical contributor to, and an editor of, the first version of the AVS video standard. He won the State Technological Invention Award (second prize) of China in 2006. He is also an adjunct professor at the University of Science and Technology of China.

H.261 is a video codec belonging to the H.26x family of video coding standards in the domain of the ITU-T Video Coding Experts Group (VCEG). It was designed in 1990 for transmission over ISDN lines, primarily for videoconferencing and video telephony. ISDN lines have data rates that are multiples of 64 kbit/s, and the algorithm operates at video bit rates between 40 kbit/s and 2 Mbit/s. H.261 was the first practical digital video coding standard, and all subsequent international video coding standards have been based on its design. The coding algorithm uses a hybrid of motion-compensated inter-picture prediction and spatial transform coding, with scalar quantization, zig-zag scanning and entropy coding.
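The zig-zag scanning step in the pipeline above is easy to make concrete. The Python sketch below (an illustration only; H.261 specifies the scan order but no implementation) reorders an 8x8 block of transform coefficients along anti-diagonals so that low-frequency coefficients come first, letting the trailing runs of quantized-to-zero high-frequency coefficients compress well in the entropy-coding stage.

```python
def zigzag_scan(block):
    """Return the 64 coefficients of an 8x8 block in zig-zag order.

    Coefficients are visited along anti-diagonals (constant i + j),
    alternating direction on each diagonal, so low-frequency terms
    come first and high-frequency zeros cluster at the end.
    """
    order = sorted(
        ((i, j) for i in range(8) for j in range(8)),
        key=lambda p: (p[0] + p[1],                            # which anti-diagonal
                       p[0] if (p[0] + p[1]) % 2 else -p[0]),  # traversal direction
    )
    return [block[i][j] for i, j in order]
```

Scanning a block whose entry at (i, j) is 8*i + j reproduces the familiar index sequence 0, 1, 8, 16, 9, 2, 3, 10, 17, 24, ... used by H.261 and JPEG.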

H.263 is a video codec standard originally designed as a low-bitrate compressed format for videoconferencing. It was developed by the ITU-T Video Coding Experts Group (VCEG) in a project ending in 1995/1996, as one member of the H.26x family of video coding standards in the domain of the ITU-T. H.263 was developed as an evolutionary improvement based on experience from H.261, the previous ITU-T standard for video compression, and from the MPEG-1 and MPEG-2 standards. Its first version was completed in 1995 and provided a suitable replacement for H.261 at all bit rates. H.263 has since found many applications on the Internet: much Flash Video content (as used on sites such as YouTube, Google Video and MySpace) is encoded in this format, though many sites now use VP6 encoding, which has been supported since Flash 8. The original version of the RealVideo codec was based on H.263 up until the release of RealVideo 8. The codec was first designed for H.324-based systems (PSTN and other circuit-switched network videoconferencing and video telephony), but has since also found use in H.323 (RTP/IP-based videoconferencing), H.320 (ISDN-based videoconferencing), RTSP (streaming media) and SIP (Internet conferencing) solutions.

MPEG-4 is a collection of methods defining compression of audio and visual (AV) digital data. Introduced in late 1998, it designates a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) under the formal standard ISO/IEC 14496. Uses of MPEG-4 include compression of AV data for the web (streaming media) and CD distribution, voice (telephone, videophone) and broadcast television applications. MPEG-4 absorbs many of the features of MPEG-1, MPEG-2 and other related standards, adding new features such as (extended) VRML support for 3D rendering, object-oriented composite files (including audio, video and VRML objects), support for externally specified digital rights management and various types of interactivity. AAC (Advanced Audio Coding) was standardized as an adjunct to MPEG-2 (as Part 7) before MPEG-4 was issued. Initially, MPEG-4 was aimed primarily at low-bit-rate video communications; its scope was later expanded to make it much more of a general multimedia coding standard. MPEG-4 is efficient across a range of bit rates from a few kilobits per second to tens of megabits per second.

DivX is a brand name of products created by DivX, Inc. (formerly DivXNetworks, Inc.), including the proprietary DivX codec, which has become popular due to its ability to compress lengthy video segments into small sizes while maintaining relatively high visual quality. The DivX codec uses most of the lossy MPEG-4 Part 2 compression techniques, also known as MPEG-4 Visual, where quality is balanced against file size for utility. It is one of several codecs commonly associated with "ripping", whereby audio and video multimedia are transferred to a hard disk and transcoded. Many newer "DivX Certified" DVD players are able to play DivX-encoded movies, although the quarter-pixel (Qpel) and global motion compensation features are often omitted to reduce processing requirements; they are also excluded from the base DivX encoding profiles for compatibility reasons.

In multimedia, Motion JPEG (M-JPEG) is an informal name for multimedia formats in which each video frame or interlaced field of a digital video sequence is separately compressed as a JPEG image. It is often used in portable appliances such as digital cameras. Motion JPEG uses intraframe coding that is very similar to the I-frame part of video coding standards such as MPEG-1 and MPEG-2, but it does not use interframe prediction. Forgoing interframe prediction costs compression efficiency but eases video editing, since simple edits can be performed at any frame when all frames are I-frames. Video coding formats such as MPEG-2 can also be used in an I-frame-only fashion to provide similar compression capability and similar ease of editing. Using only intraframe coding also makes the degree of compression independent of the amount of motion in the scene, since temporal prediction is not used. However, although the bit rate of Motion JPEG is substantially better than that of completely uncompressed video, it is substantially worse than that of video codecs which use interframe motion compensation, such as MPEG-1. (One exception may be surveillance cameras that take only one frame per second, in which time there could be large amounts of motion that MPEG could not compensate for.) A more advanced version of this codec uses JPEG 2000 compression instead of JPEG; that format is used primarily in digital cinema and is also under consideration as a digital archival format by the Library of Congress.
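The editing advantage described above follows directly from frame independence and can be sketched in a few lines. In the toy model below, zlib stands in for a real JPEG encoder (an assumption made purely to keep the example self-contained and lossless); the point it illustrates is that when every frame is compressed on its own, cutting the stream at any frame boundary is plain slicing, with no re-encoding of neighboring frames.

```python
import zlib


def mjpeg_encode(frames):
    """M-JPEG style: compress every frame independently (all intra-coded).

    zlib is a stand-in for a real JPEG encoder in this sketch."""
    return [zlib.compress(frame) for frame in frames]


def mjpeg_decode(stream):
    """Decode each compressed chunk back into a frame."""
    return [zlib.decompress(chunk) for chunk in stream]


def cut(stream, start, end):
    """Edit at any frame boundary: because no frame references another,
    a cut is just list slicing, and no frame has to be re-encoded."""
    return stream[start:end]


# Usage: encode five dummy frames, cut out frames 1..3, decode the clip.
frames = [bytes([i]) * 1024 for i in range(5)]
clip = cut(mjpeg_encode(frames), 1, 4)
assert mjpeg_decode(clip) == frames[1:4]
```

An interframe codec such as MPEG-1 could not cut this way: a P-frame at the cut point would reference a frame that is no longer in the stream, forcing a re-encode.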
