The last month has had us wrapping up our activities around Cine Gear Expo, including our popular “Focus on Cine Lenses From the Buyer’s POV” event and recorded sit-down interviews for our “Backlot Perspectives” series (both now streaming). We sometimes get a little breather after Cine Gear, but this year we used the time to visit our friends at the growing Midwest event, Filmscape Chicago. We’ve also been busy cooking up events to fill out the rest of the year, and have quite a bit of industry news to relate, so there is no time to kick back this summer.
Industry Tech News includes Blackmagic developing a new camera and workflow for the creation of Apple Vision Pro immersive content, as well as their free Digital Film Camera control app, now available for Android. Meanwhile, Core has introduced the MoXIE, a large-capacity but still portable on-location power solution, and there is a new Dual Fisheye lens from Canon for immersive social media content. AJA has released the Ki Pro GO2, the latest addition to their family of multi-channel recorders, and the U.S. government is considering a ban on DJI drones due to Chinese surveillance concerns.
For his essay this month, James Mathers looks at how AI technology is impacting filmmaking and in particular the field of cinematography.

DCS News

DCS at Cine Gear Expo 2024 – Backlot Perspectives Interviews

Cine Gear Expo LA was held this year at the iconic Warner Bros. Studios backlot and DCS was there to cover it, interviewing leading technologists and industry experts, including (in alphabetical order of company name):

Tim Walker with AJA — Andy Hutton with Anton Bauer — Dan Kanes and Scott Dewald with Atlas Lens Co. — Tobias Frischmuth with ARRI — Matthew Irving with Canon — Steve Manios with Cartoni — Joseph Mendoza with Cineo — Marianne Exbrayat with Dedolight California — Pamela Bloom with Fujifilm — Jameson Brooks with Godox — Jay Margolis with Infinity Photo-Optical — Eric Eggly with Jagoteq — Jeffrey A. Reyes with Lee Filters (a Panavision Company) — Seth Emmons with LEITZ — Matt Frazer with Panasonic LUMIX — Nick Stabile with RatPac Controls — Emily Stadulis with Rosco Lighting — Kazuto Yamaki with SIGMA — Tony Wisniewski with ZEISS

Visit our Cine Gear 2024 webpage to view this year’s entire Backlot Perspectives interview collection:

Also Streaming: “2024 DCS Cine Lenses From the Buyer’s Point of View” recorded at Cine Gear Expo

DCS once again held our popular examination of the cinema lens market at Cine Gear Expo, with a concentration on choosing the right lenses to invest in. We explored what’s new from some of the top lens manufacturers and how they are reacting to the latest trends, such as growing sensor sizes, character vs. optical perfection, metadata, and anamorphic cinematography, as well as how they choose which mounts to provide on their lenses. If you’re thinking of investing in cine lenses, this event is tailored for you.

This event was recorded on June 7th in a screening room on the Warner Bros. Studios backlot during Cine Gear Expo 2024. DCS Founder James Mathers served as moderator with the assistance of Matthew Duclos, of Duclos Lenses.

Participating manufacturers and their representatives include (in alphabetical order):

Angénieux, Randy Wedick
ARRI, Art Adams
Atlas Lens Co., Scott Dewald
Canon, Matthew Irving
Cooke Optics, Chris D’Anna
Fujifilm Optical Division, Stosh Durbacz
Infinity Photo-Optical, Ed Stamm
Leitz Cine, Seth Emmons
Sigma, Aaron Norberg
ZEISS, Jean-Marc Bouchut


Industry News

Blackmagic Developing a New Camera and Workflow to Create Apple Vision Pro Compatible Immersive Cinematic Content

In only a brief mention at the June 10th Apple WWDC, the wraps came off the Blackmagic URSA Cine Immersive system now in development. Although still far from a shipping product, it is designed to provide a full end-to-end workflow with DaVinci Resolve for producing cinematic Apple Immersive Video on Vision Pro, giving filmmakers the tools to shoot, edit, and deliver in the format. Based on the new Blackmagic URSA Cine platform, the URSA Cine Immersive model features a fixed, custom lens system with a sensor said to produce 8160 x 7200 resolution per eye with pixel-level synchronization. It promises extremely wide dynamic range and shoots 90 fps stereoscopic into a Blackmagic RAW Immersive file, an enhanced version of Blackmagic RAW designed to keep immersive video simple to use throughout the post-production workflow.
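To put those quoted specs in perspective, here is a purely illustrative back-of-envelope calculation of the uncompressed data rate implied by 8160 x 7200 per eye at 90 fps stereoscopic. The 12-bit readout is an assumption for illustration only; Blackmagic RAW Immersive is a compressed format, so actual file bitrates will be far lower:

```python
# Back-of-envelope: uncompressed data rate implied by the quoted specs.
# 12-bit depth is assumed for illustration; Blackmagic RAW applies
# compression, so real recorded bitrates are much lower than this.
width, height = 8160, 7200
eyes = 2                 # stereoscopic: one image per eye
fps = 90
bits_per_pixel = 12      # assumed raw sensor bit depth

bits_per_second = width * height * eyes * fps * bits_per_pixel
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"{gigabytes_per_second:.1f} GB/s uncompressed")  # roughly 15.9 GB/s
```

Even allowing for heavy compression, numbers like these explain why a dedicated file format and post workflow are part of the announcement.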

The URSA Cine Immersive also features a custom lens system designed specifically for its large-format image sensors, with extremely accurate positional data calibrated at the time of manufacture. The immersive lens data is mapped, calibrated, and stored per eye in the Blackmagic RAW Immersive file itself, so it travels through the post-production process without the need for multiple files to capture an immersive scene; all of this will be handled in updates to DaVinci Resolve. Those updates will also add a new immersive video viewer that lets editors pan, tilt, and roll clips for viewing on 2D monitors, and even allows clips to be adjusted and monitored on an Apple Vision Pro headset for an immersive editing experience. Using XML metadata, editors will be able to deliver clean master files with transitions bypassed and rendered instead by the Vision Pro itself. Export presets will enable output into a package that can be viewed directly on Apple Vision Pro. A specific timeline for the product release and price point has yet to be announced.


The U.S. Government Is Considering a Ban on DJI Drones Due to Chinese Surveillance Concerns

The Countering CCP Drones Act, introduced in the U.S. Congress by Representative Elise Stefanik (R-NY), would create a nationwide ban on the use of DJI (Da Jiang Innovations) drones. The legislation proposes adding DJI to a list maintained by the Federal Communications Commission (FCC) under the Secure and Trusted Communications Networks Act of 2019, which would block DJI’s drones from running on communications infrastructure in the U.S., effectively rendering them unusable. Modern drones depend on that infrastructure for core functionality like GPS navigation, control signal transmission, and the video transmission necessary for stable flight, precise control, and real-time visual feedback.

Proponents of the bill cite national security concerns, alleging that DJI drones provide data on critical U.S. infrastructure to the Chinese Communist Party, data they claim could be used for military or industrial espionage and technological dominance over the U.S. The bill could significantly impact the drone industry in the U.S., affecting both professionals and consumers who rely on DJI drones, including for public safety applications like search-and-rescue missions. DJI and a coalition of user groups are fighting the ban through lobbying efforts and more. Opponents argue that such a measure would not only stifle competition but also perpetuate xenophobia while hindering innovation in the drone industry. DJI also maintains that users can opt out of the feature allowing their drones to collect flight logs, photos, or videos, as well as disconnect the flight app from the internet.

The company has dominated both the consumer and commercial markets with an estimated 58% global market share, likely due to its relative affordability and a steady stream of advanced features that make it easy for beginners to learn to fly a drone. Beyond the consumer market, DJI has become a fixture in commercial applications, including the entertainment industry, where its drones are increasingly used for aerial cinematography. Following approval by the House Energy and Commerce Committee, the bill would still need to make its way through Congress before being presented to President Joe Biden, who would have the option to veto it. The motion picture industry, among many others, will be keeping a close eye on these developments; if this bill flies, DJI drones in the U.S. will not.


Core Introduces the MoXIE Solo for Dependable and Portable On-location Power and Emergency Backups

Core has introduced the MoXIE Solo – Mobile, Exchangeable, Intelligent Energy solution. The unit is designed to provide dependable power, replacing noisy, fume-spewing putt-putt generators for on-location shoots, remote film sets, and emergency backups. The compact MoXIE Solo delivers 3.6kW AC output with simultaneous DC outputs and is compatible with two distinct cell packs, each with unique advantages. The “Vita” cell is a 3kWh LiFePO4 battery known for its robustness and impressive cycle life of over 4,000 cycles. The “Kodiak” cell is a 2kWh sodium-ion battery that excels in extreme cold, operating efficiently in temperatures as low as -40°F.

Additionally, the MoXIE Solo offers multiple DC outputs (15v, 28v, and 48v), each capable of handling up to 16A. This makes the MoXIE Solo well suited for running DC-input production devices on set or in remote locations requiring reliable energy, without the need for additional power supplies. Two 14v power tap outputs, along with USB-C PD and USB-A ports, are also included for powering and charging mobile devices and performing firmware updates.

The MoXIE Solo is also equipped with two expansion ports, allowing for the connection of additional MoXIE cell packs directly to the unit, thereby expanding its capacity. This feature enables hot-swapping as well as boosting capacity up to 9kWh of power by coupling three Vita Cell packs. The mating ports on the battery packs serve as external charge ports when not charging through the MoXIE Solo system.

Measuring just 16” x 17” x 24” and weighing only 121lbs (with either cell installed), the MoXIE Solo delivers the same power as a Honda 5k gas generator, which weighs over 200lbs. This makes the MoXIE Solo not only a greener and more sustainable alternative but also significantly more portable. The battery cell pack can be removed, reducing the unit’s weight to a manageable 59lbs, with the cell pack itself weighing 61lbs. This modularity enhances portability, allowing one person to easily manage setup and breakdown, and the unit can be transported in a car or small production van without the need for a lift gate or forklift. For more details visit:


Canon Introduces RF-S3.9mm F3.5 STM Dual Fisheye Lens as an Accessible and Affordable Stereo Content Creation Tool

Canon has announced the new RF-S3.9mm F3.5 STM Dual Fisheye lens, bringing ease, affordability, and quality to media creators interested in exploring stereo content creation. Compatible with the EOS R7 camera, the lens will be available in June 2024 at an estimated MSRP of $1,099 USD, making stereo content creation with a mirrorless camera, for uses including VR social media, more accessible and efficient than ever.

Designed to empower creators of all types, this lens offers a balance between clarity and usability for vlog-style VR creation. This APS-C stereoscopic VR lens achieves a 144º wide-angle view and uses equidistant projection, making it ideally suited for everyday, virtually hassle-free VR production. Also designed with versatility in mind, the lens permits multiple methods of camera handling, from hand-holding to gimbal or tripod mounting. Canon’s EOS VR Utility software (available separately with a paid subscription) is designed for a smooth editing process.
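For those curious what “equidistant projection” means in practice: an ideal equidistant fisheye maps a ray’s angle from the optical axis linearly to a radial distance on the sensor (r = f·θ), keeping angular spacing uniform across the frame, which is convenient for VR stitching. A minimal sketch of that generic textbook model, plugging in the lens’s 3.9mm focal length and 144° field of view (this is the standard formula, not Canon’s published calibration):

```python
import math

def equidistant_radius(f_mm: float, theta_deg: float) -> float:
    """Image-plane radius in mm of a ray arriving at angle theta
    (degrees) from the optical axis, under the ideal equidistant
    fisheye model r = f * theta (theta in radians)."""
    return f_mm * math.radians(theta_deg)

f = 3.9            # focal length in mm
half_fov = 144 / 2 # half of the 144-degree field of view
print(f"Edge of FOV lands {equidistant_radius(f, half_fov):.2f} mm off-axis")
```

Real lenses deviate slightly from the ideal curve, which is why per-lens correction profiles in software like EOS VR Utility matter for clean stereo output.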

Features of the RF-S3.9mm F3.5 STM Dual Fisheye lens include:
● One-shot AF and left/right focus adjustment with focus ring, for effortless and precise shooting in virtually any environment.
● An Air Sphere Coating (ASC), a first for Canon non-L Series lenses, helps minimize ghosting and facilitate pristine image quality.
● Rear filter holder that accommodates both 30.5mm screw-on filters and sheet-type filters, i.e. gelatin or polyester filters.

For additional information, please visit

AJA introduces the Ki Pro GO2, the Latest Addition to their Family of Multi-channel HD/SD Recorders

The Ki Pro GO2 gives the user the choice of four channels of H.265 (HEVC) or four channels of H.264 (AVC) recording to cost-efficient USB 3.0 drives or network storage, with redundant recording and single-channel playback. It includes four 3G-SDI and four HDMI inputs for compatibility with a wide range of video sources. See the full list of Ki Pro GO2 feature highlights below, and to find out more, visit:

  • Intuitive operation with an easy-to-navigate web UI that is compatible with standard web browsers, front panel device buttons, and an integrated HD screen.
  • 5x USB recording media ports (compatible with off-the-shelf USB 3.2 Gen 1 media), 4x 3G-SDI inputs, 4x HDMI inputs, 4x 3G-SDI outputs, 1x 3G-SDI monitoring output, 1x HDMI monitoring output, balanced XLR analog audio inputs, mic/line/48v switchable, and an Ethernet LAN port.
  • Real time recording to network storage
  • Onboard exFAT drive formatting
  • Built-in frame syncs
  • Support for HDMI and SDI multi-channel matrix monitoring
  • Enhanced Super Out for monitoring timecode, media status, and audio levels
  • Integrated web-browser UI and front panel button controls, with HD resolution video screen for confidence monitoring and menu information
  • Single channel H.264 or H.265 playback
  • Selectable VBR recording settings with five options
  • Timecode SDI RP-188 Input Support: Time of Day or Timecode Value
  • LTC Support, available using a single channel of Analog Audio In
  • Two channels of embedded audio per video input
  • Compact 1/2 rack width, 2RU height
  • Three year warranty and AJA’s world-class support

Blackmagic Announces Their Free Digital Film Camera Control App is Now Available for Android

Blackmagic Design has announced Blackmagic Camera for Android, which adds digital film features and controls to Samsung Galaxy and Google Pixel phones, greatly improving the results customers can get so shots can be used for television and film production. Based on the same operating system as Blackmagic Design’s digital film cameras, these professional features give Android content creators the same tools used in feature films, television, and documentaries. Support for Blackmagic Cloud allows creators to collaborate and share media with multiple editors and colorists around the world instantly. Blackmagic Camera is now available from Google Play, free of charge.

Blackmagic Camera unlocks the power of the phone by adding digital film camera controls and an operating system that lets customers create unique cinematic ‘looks.’ App users can adjust settings such as frame rate, shutter angle, white balance, and ISO in a single tap, or record directly to Blackmagic Cloud in industry-standard files up to 8K. Recording to Blackmagic Cloud Storage lets customers collaborate on DaVinci Resolve projects with editors anywhere in the world, all at the same time. The heads-up display, or HUD, shows status and record parameters, histogram, focus peaking, levels, frame guides, and more; show or hide it by swiping up or down, and tap the screen where you want to autofocus. Customers can shoot in 16:9 or vertical aspect ratios, and can even shoot 16:9 while holding the phone vertically if they want to shoot unobtrusively. There are also tabs for media management, including uploading to Blackmagic Cloud, chat, and access to advanced menus.

Blackmagic Camera for Android Features

  • Works with Samsung Galaxy and Google Pixel phones
  • Shoot in 16:9 or vertical aspect ratios.
  • Stealth mode for shooting 16:9 while holding phone vertically.
  • Capture H.264 and H.265 with auto proxy generation.
  • Frame rate, shutter speed, exposure, white balance, tint and color space camera controls.
  • Focus assist, zebra, frame guides, histogram.
  • Time of day or run time, timecode recording.
  • VU or PPM audio meters.
  • Thumbnail view of all recorded clips in media tab.
  • Preview clips with scrubber, duration, timecode and file name display.
  • Fully integrated with Blackmagic Cloud and DaVinci Resolve.
  • Record to phone, select recorded clips to share via Blackmagic Cloud or sync automatically. 

AI-ography: How AI Technology is Changing The Way We Create Motion Picture Imagery

by James Mathers
Cinematographer and Founder of the Digital Cinema Society



Other than some focus assist features, what we commonly refer to as AI has not yet reached too deeply into the professional cinematography field, but that is about to change. The technology is as impressive as it is frightening, and which side of that spectrum you find yourself on probably depends on which end of the industry you are coming from, as well as your level of optimism regarding new technology. But whether the glass is half full or half empty, it is extremely fluid. I would like to share a few observations, from my perspective as a cinematographer, on some recent advancements in AI for motion imagery creation.

“World’s First AI-Powered Movie Camera”

“World’s First AI-Powered Movie Camera”…If I saw this headline on April 1st instead of mid-June 2024, I might suspect it was an April Fools’ prank. However, the CMR-M1 (short for Camera Model 1) debuted at an advertising convention known as the Cannes Lions International Festival of Creativity, and it’s no joke. Developed by SpecialGuestX and 1stAveMachine, it is an experimental innovation in the field of filmmaking, purportedly the first movie camera designed to integrate generative AI technology directly into the digital capture process.

Looking more like a Kodak box camera from the 1920s than a cutting-edge digital motion picture camera, it incorporates a FLIR sensor, Snapdragon CPU, and viewport to capture real-world footage and transform it using generative AI. The current prototype barely records HD, at only 1368×768 resolution, is limited to 12 fps, and, with processing in the cloud via a Stable Diffusion workflow, introduces a minor delay. However, plans are to reduce latency and beef up the electronics for real-time processing at higher resolutions. The goal is to bring AI into the physical filmmaking process rather than doing everything at a prompt in post, allowing live-action footage to be instantly integrated as animation is simultaneously created around it.

The CMR-M1 features professional camera accessories such as a matte box and tripod base, as well as interchangeable lenses, and it has in-camera editing capabilities. This initial prototype is not for commercial purposes, but is designed as part of the research process to create physical interfaces for generative AI. The system is currently equipped with five Stable Diffusion “LoRAs” (an acronym for Low-Rank Adaptation, a technique designed to refine and optimize large AI models). These help style images into a variety of forms, in this case everything from colorful jungle integrations to scenes with opulent decor, tuxedos, and gold coins. The camera also has a slot for a “style card,” a chip that lets filmmakers create unique styles and workflows using models trained with their own images and personalized prompts. According to Miguel Espada, co-founder and executive creative technologist of SpecialGuestX, they have designed a camera that serves as a physical interface to AI models. More details and an impressive sample reel are available at:
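For readers curious what a LoRA actually does under the hood: instead of retraining a model’s full weight matrices, it learns a small low-rank update (W + B·A, where B and A are narrow matrices) that nudges the frozen weights toward a new style. A minimal NumPy sketch of the idea, with illustrative dimensions chosen for this example (real Stable Diffusion LoRAs apply this to attention layers inside the diffusion model):

```python
import numpy as np

d, r = 512, 8                     # layer width, low rank (r << d)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))   # frozen pretrained weight matrix
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized
                                  # so the adapter starts as a no-op

def adapted(x, scale=1.0):
    """Forward pass with the low-rank update W + scale * (B @ A) applied."""
    return x @ (W + scale * (B @ A)).T

x = rng.standard_normal(d)
# With B still zero, the adapted layer matches the original exactly.
assert np.allclose(adapted(x), x @ W.T)
```

The appeal is size: this rank-8 adapter stores 2·d·r = 8,192 numbers instead of the full d² = 262,144, which is why a small “style card” chip can plausibly hold entire custom looks.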

1 Camera + 1 Crew Member + AI = Comprehensive Sporting Event Coverage

Another example of AI stepping into the camera department is in the coverage of sports. An early commercial example comes from a company called Pixellot, which combines artificial intelligence, machine learning, stationary camera systems, software, and cloud computing for complex coverage of live sporting events. The Pixellot Show S3 system features a 12K triple-camera array in a single unit designed to cover an entire field at a sports venue, with additional capture units available as an option for a greater variety of POVs. The scary part for those of us in the camera department is that no camera operators are employed: the system identifies the players, follows the action, and cuts the program via AI.

Traditional sports coverage, even for a smaller live event, requires several camera operators, as well as an engineering and directing team, to cover the action on the field or court. The cameras feed into a production studio or broadcast mobile unit to mix everything from picture, sound, special effects, and graphics to live commentary as the game is occurring. It is a very expensive proposition that has, until now, only been justified at the very high end of sporting events.

The Pixellot system proposes to provide coverage with a one-man-band type of operation. The current business model is not to tackle events like the Super Bowl, but the many, many games of lower leagues, niche sports, lesser-known college games, or even high school matches and little leagues. However, the capabilities of AI are accelerating at an exponential rate, and it may not be long before these systems start covering more mainstream professional matches.

You can be sure that AI will get some trial runs at the 2024 Olympics coming up quickly in Paris. There are so many contests happening simultaneously, and not all are popular enough to justify round-the-clock coverage, which makes a great use case for such technology. AI voice generators also make it possible to convert text into speech almost instantaneously in a wide variety of languages and voices. In fact, NBC has announced they’ll be using AI to replicate the voice of top sportscaster Al Michaels to complement their traditional coverage of the games. A tool called “Your Daily Olympic Recap on Peacock” will use AI software to create a 10-minute personalized playlist of event highlights from the previous day, narrated by an AI recreation of Michaels’ voice, which will, they claim, match “his signature expertise and elocution.”

I’m thankful they didn’t have this kind of technology when I worked on the 1984 Olympics here in Los Angeles. I had the wonderful job of covering New Zealand and Mainland China. It kept me traveling from venue to venue for 28 straight days to whichever events those two countries were competitive in. It was certainly preferable to being stuck at a single venue on the same shot, which would have quickly become tiresome. It was the highest income producing job I had that year, and probably that decade. The Olympics will be returning to L.A. in four years, and I feel for the up-and-coming camera personnel whose similar job opportunities will surely be diminished by the time 2028 rolls around.

No Cameras Needed Here

Generative AI has also been quickly invading the domain of filmmaking, able to create photorealistic moving images that could put a real dent in the livelihoods of those who sell stock footage or provide aerial and drone services, as well as animators. If you have any doubts, just take a look at some of the user-generated samples on the OpenAI Sora demo page:

Be sure to check out the aerial following of an SUV on a mountain road, the “drone shots” circling an ancient church on the Amalfi Coast, the panoramic of the Big Sur coastline with waves crashing on the rugged rocks, the golden retriever puppies playing in the snow, or the woolly mammoths charging toward the “camera.” These were all created by adding text prompts into Sora, OpenAI’s generative text-to-video model.

There is no need to go shoot custom elements; Sora is able to generate complex scenes with accurate details of subject and background. The model also understands how those things exist in the physical world, having been trained with the detail and context to create the imagery. The term “script to screen,” which used to connote a sometimes years-long process involving hundreds of artists, can now be accomplished by typing prompts into an AI image generator. The above-referenced samples are from OpenAI’s Sora, but several competing companies are constantly pushing the AI envelope, including Stability AI, Pika, Runway, and Luma Labs.

You will not see too many long sequences, or ones with a lot of close-ups of human interaction. It is still challenging for most generative video tools to maintain consistency over a longer sequence, and the models also struggle with anatomical details like hands and faces, but it is amazing what they can already do, and it is only going to improve with time. A short narrative made with Sora, called “Air Head,” could pass for real footage if it didn’t feature a man with a balloon for a face.

New AI Tools Being Used To Enhance Traditional Narrative Techniques

An example of AI working in concert with traditional filmmaking techniques is Here, an upcoming feature directed by Robert Zemeckis, starring Tom Hanks and Robin Wright, to be released in November. The story revolves around the events happening on a single spot of land, following its inhabitants from the past into the future. An AI technology known as “Metaphysic Live” is employed to face-swap and de-age the actors in real time as they perform, instead of using additional post-production processing. While films such as The Irishman and Indiana Jones and the Dial of Destiny have previously used post-production techniques to de-age their characters, it has been an extremely long and expensive process. With AI-generated imagery integrated live on set, the performers can test their acting choices for their characters’ various ages in the film and have that feedback loop with the director. To see the trailer for Here, visit:

Of course, I’m only scratching the surface here, and there are so many new iterations and use cases for AI constantly being developed. If we plan to stay active and relevant in the entertainment industry, we can’t fear new technology or simply sit back and wait for it to usurp our professional roles. Instead, we need to learn to employ AI as we have when other technological advancements or disruptions have occurred in our industry. We need to transition and adapt as we have before from analog to digital tools, from standard def to HD, then 4K, and beyond, from tape to disk and now to the cloud. However, employing new tech doesn’t mean we have to abandon our filmmaking skill set. Whatever the technology, it is still all about storytelling. If we find new ways to tell those stories, we will only add to our filmmaking abilities. And there will always be a place for “organic” content creation and our trusted tools, like capturing on celluloid.

Motion picture technology is an evolutionary process and the rate of change is increasing exponentially. So buckle up and be ready to continue the journey. The Digital Cinema Society will be along for the ride, helping where we can to keep you current on new technology while honoring the tools and techniques that have been developed over more than a century of motion picture production.


The term “AI-ography” has been proposed by veteran editor and technologist Lawrence Jordan, ACE, who has created a website dedicated to keeping up with AI technology in the motion picture industry. You can follow along at:




Another great resource is Curious Refuge, founded by the husband-and-wife team of AI educators Caleb and Shelby Ward. They offer classes to master the latest AI tools and host a collection of user-generated short film samples on the Curious Refuge website:

DCS Member Discounts

See all the Professional Industry Offers Available to Current Members along with Access to FunEx for Exclusive Member Discounts on Theme Parks, Movie Tickets, Hotels, Gym Memberships, and Much More on the DCS Member Discount Page:

How to Claim Your DCS Member Discounts on FunEx for THEME PARKS, MOVIE TICKETS, HOTELS, GYM MEMBERSHIPS, and More

The Digital Cinema Society has partnered with FunEx to offer an amazing discount platform for our U.S. members. Once registered on Join It, current DCS members will receive access to 100,000+ discounted offers at up to 55% off. Save on theme parks, movie tickets, hotels, gym memberships, retail, and dining nationwide. Check out the daily deals on everything from apparel and electronics to mental wellness and pet insurance today on your DCS FunEx member portal. Current U.S.-based members can visit the DCS page on Join It to register: If you are not currently a member, you can apply to join DCS here:



Although we have made member dues optional due to the financial stress felt by so many during the strike, we still need your contributions to continue our mission and maintain our services to members. So if you can afford it, please take a few minutes to visit the self-service membership portal, where you can make a donation, check your membership status, and even download a membership card to show for special events, discounts, and award consideration screenings (which we expect to start up again soon).
  1. Visit:
  2. Enter your email address
  3. If you haven’t already chosen a password, Join It will send you a link to create a password.
  4. Once your password is set, you will be redirected to your membership page.
Here, you will also be able to access your digital DCS membership card. (*We no longer mail physical membership cards).
Please note: If your email address is not recognized by the Join It system or if you believe that your membership status is incorrect, please contact us at:
DCS is a 501(c)3 nonprofit, and donations can likely be claimed as a deduction on your U.S. federal taxes. If you prefer, you can simply follow the convenient PayPal link (using any major credit card; you don’t need to be signed up for PayPal), or you can send payment to our offices at P.O. Box 1973, Studio City, CA 91614, USA.

Donate to DCS:



As always, we want to send out a big thanks to all “Friends of DCS,” whose support makes it possible for us to continue the DCS mission of educating the entertainment industry about the advancements in digital and cine technology:

AbelCine – Adobe – AJA – Angénieux – Anton/Bauer – ARRI – Atlas Lens Co. – Avid – Band Pro – Blackmagic Design – Brokeh – Canon – Cartoni – Cineo – Cinnafilm – Codex – Cooke Optics – CORE SWX – Dadco/SunRay – Dedolight California – Fiilex – FootageBank – Fujinon – Godox – Infinity Photo-Optical – Jagoteq – Kino Flo – Leitz Cine Wetzlar – Litepanels – Luminys – Nanlite – Nanlux – OWC – Panasonic Lumix – Panavision – Quasar Science – RatPac Controls – Rebel Marketing – Rosco – SIGMA – SmallHD – SUMOLIGHT – Teradek – The Studio-B&H – Tiffen – TRP Worldwide – Wooden Camera – ZEISS


Follow DCS on Social Media

Don’t forget that the Digital Cinema Society has a Facebook fan page. Check in for the latest news, event details, and general DCS hubbub at: On X, you can follow us @DCSCharlene, and look for us on Instagram as digitalcinemasociety.
Also, get involved on the official DCS Facebook Group at: Here, DCS members and like-minded individuals and organizations can share event notices and discuss topics related to cine technology and industry trends.

Our Home, The Digital Cinema Society:

“It is not the strongest of the species that survive, nor the most intelligent, but the most responsive to change.” Charles Darwin