DeNoise software

mickyates AI, Artificial Intelligence, Candid, Mick's Photo Blog, Photography, Processing, Technology

I have been using Topaz DeNoise software occasionally (though I use Topaz Sharpen a lot). Both are billed as ‘AI’, and both give good results. In the past few days Adobe have added ‘DeNoise’ to Lightroom, again billing it as AI. I decided to do a quick and dirty side-by-side test.

Yesterday I took an against-the-light photograph of our good friend Rod Higginson using my iPhone 13 Pro Max set to record RAW (DNG). First I adjusted the image in Lightroom. As the shot was a stop underexposed, the adjusted image was rather noisy. Here is a close-up crop. Click on the image to see it larger in your browser.

Original – JPG export of a DNG (RAW) file

I then processed the image in Topaz DeNoise AI, set to 'low light', with 50 denoise, 50 sharpen and 50 recover detail. Here is the result, which I found pretty good. Interestingly, outputting to TIF also seemed to slightly brighten the image.

Topaz DeNoise – JPG export of a TIF file

I then used the new Adobe DeNoise tool. This has limited room for tailored settings. To my eye, it is a tad sharper than the Topaz result with equal noise reduction, as seen on Rod's hand. Also, as the output is an enhanced DNG, it did not get the same brightening lift as the TIF from Topaz.

Adobe DeNoise – JPG export of an Enhanced DNG file

Finally, here is a side-by-side screenshot of the results from Topaz DeNoise AI (left) and Adobe DeNoise (right).

Although this is very 'quick and dirty', and I could adjust the Topaz settings further, it seems to me that Adobe is onto something very good here, not least because outputting as DNG allows finer control than TIF after the de-noising step. It is also worth noting that Adobe make this available within their current tool/app subscription, so if they also up their game on sharpening, honestly I do not think I will pay extra for Topaz. Sorry, guys.

One point I missed, which David Collyer commented on: Adobe DeNoise does not yet work on files other than RAW. It handles my Leica DNGs and Nikon NEFs, but it can't yet be used on, for example, slides or negatives scanned as TIFFs, which is my current (and historical) default. I hope that gets fixed.

Anyone else run tests?

Ray Tracing

mickyates AI, Art, Artificial Intelligence, Computer Generated, Ideas, Mick's Photo Blog, Photography, Technology

In the early days of my use of the Internet, I used to dial into bulletin boards (BBSs), from Japan to the West Coast. The first useful web browser, Mosaic, only appeared in 1993. Several of the BBSs were all about computer-created graphics, and I was fascinated by the 'hyper-real' aspect of images made by ray tracing, which used lines of computer code (i.e. text) to instruct the computer how to create images. These got ever more sophisticated, and whilst early artworks inevitably looked like computer graphics, these days some are not so easy to spot. Given the current developments in AI with DALL-E and the like, I thought I would go back and revisit the history of ray tracing, and in particular POV-Ray, THE application to create images with this technique.

The header is from 1993 by Mike Miller, whose artworks used to wow everyone. Interestingly, he has resurfaced in 2023, having retired from a very successful graphic design career.

I got a little help from ChatGPT in the search and references, so what follows is a blend of those searches with my thoughts.


Ray tracing is a rendering technique used to create photorealistic images by simulating the interaction of light with objects in a virtual scene. It traces the path of light rays from the camera through the scene, accounting for various phenomena such as reflection, refraction, and shadows. The Persistence of Vision Raytracer (POV-Ray), an open-source software application that has evolved over the years, pioneered the use of ray tracing in computer graphics.

The concept of ray tracing can be traced back to Albrecht Dürer in the 16th century, who described multiple techniques for projecting 3D scenes onto an image plane. In 1968, Arthur Appel proposed ray casting algorithms as a means to create realistic images of three-dimensional objects. Turner Whitted's 1980 paper, "An Improved Illumination Model for Shaded Display", introduced the recursive ray tracing algorithm, which accounted for the reflection and refraction of light rays.

Ray tracing programs follow (trace) the path of a ray of light; when the ray intersects an object, the program 'sends off' secondary rays (e.g. reflection, shadow, transmission) and determines how those rays in turn interact with the scene. The basic algorithm allows for hidden surface removal, multiple light sources, hard shadows, reflections, and transparent refractions. Extensions can achieve, amongst many other things, soft shadows, motion blur, blurred reflections (glossiness), depth of field and translucent refractions. It all sounds simple, but the code is complex and renders can take a long time. For example, Jonathan Hunt's 'Pebbles' (below) took 4.5 days in 2008!
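Just to make that loop concrete, here is a minimal Python sketch of my own (an illustration, not POV-Ray's actual code): a primary ray hits a sphere and spawns a secondary shadow ray toward a single light.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance t along the ray to the nearest sphere hit in front of it, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer intersection
    if t <= 1e-6:
        t = (-b + math.sqrt(disc)) / (2 * a)  # farther one, if we start inside
    return t if t > 1e-6 else None

def shade(origin, direction, sphere_center, sphere_radius, light_pos):
    """Trace a primary ray; on a hit, send off a secondary (shadow) ray to the light."""
    t = hit_sphere(origin, direction, sphere_center, sphere_radius)
    if t is None:
        return "background"
    hit = [o + t * d for o, d in zip(origin, direction)]
    to_light = [l - h for l, h in zip(light_pos, hit)]
    length = math.sqrt(sum(v * v for v in to_light))
    shadow_dir = [v / length for v in to_light]
    # Nudge the start point off the surface to avoid 'shadow acne' self-hits
    shadow_origin = [h + 1e-4 * d for h, d in zip(hit, shadow_dir)]
    if hit_sphere(shadow_origin, shadow_dir, sphere_center, sphere_radius) is not None:
        return "shadow"  # something blocks the light
    return "lit"

# A ray fired down the z-axis hits the unit sphere and can see the light
print(shade([0, 0, -5], [0, 0, 1], [0, 0, 0], 1.0, [10, 10, -10]))  # lit
```

A real renderer repeats this for every pixel and recurses for reflection and refraction rays, which is exactly where the long render times come from.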

In the early 1980s, David Buck, a computer science student, began developing ray tracing software called DKBTrace. He released version 1.0 in 1987. Meanwhile, Aaron Collins was independently working on a similar project. In 1989, Buck and Collins merged their projects, resulting in the first version of POV-Ray. Mike Miller also joined the project, and his early artwork created with POV-Ray became legendary.

Amongst the technical innovations from that time were:

  1. Bounding Slabs: optimized the rendering process in POV-Ray by reducing the number of intersection tests needed, enabling faster and more efficient rendering of scenes.
  2. Procedural Textures and Patterns: enabled users to create more complex and realistic materials in their scenes.
  3. The POV-Ray Scene Description Language: provided users with a powerful and flexible scripting language for describing 3D scenes.
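As an aside on item 2: a procedural texture computes a point's colour from its coordinates rather than looking it up in a stored image. A tiny Python sketch of my own, illustrating the idea behind a checker pattern like POV-Ray's `checker` pigment:

```python
import math

def checker(x, y, z, size=1.0):
    """Procedural checker: the colour is a pure function of the point's position."""
    # The parity of the summed cell indices alternates between the two colours
    total = math.floor(x / size) + math.floor(y / size) + math.floor(z / size)
    return "white" if total % 2 == 0 else "black"

print(checker(0.5, 0.0, 0.5))  # white
print(checker(1.5, 0.0, 0.5))  # black
```

Because the pattern is computed, it covers an infinite plane at any resolution with no stored image at all.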

Here are a few examples of work created with POV-Ray that I particularly like, ranging from the obvious computer graphics of the '90s to something way more subtle and realistic. Some recent ones are hard to tell apart from photographs, I think.

Mike Miller. 1993. Fish & Locomotive.

Mike Miller. Undated. Car Kit.

Jaime Vives Piqueres. 2005. Christmas Baubles.

Gilles Tran. 2006. Glasses.

Jonathan Hunt. 2008. Pebbles.


To bring things together, I asked ChatGPT to write some POV-Ray code for me 🙂

Prompt: Generate the POV-ray code for a globe of the world in realistic colours on a chessboard

This code sets up a camera, a light source simulating bright sunlight, a globe built from a textured sphere, and a chessboard using the checker pattern on a plane. First, download a world map in equirectangular projection (a common format for globe textures) and save it as "world_map.jpg" in your project folder. You can then customize the colors, materials, and positions as needed to further refine your scene. Be sure to save this code in a text file with the ".pov" extension, and then render the scene using POV-Ray.

// Camera settings
camera {
  location <0, 15, -20>
  look_at <0, 0, 0>
  angle 45
}

// Sunlight-like light source
light_source {
  <0, 100, -100>
  color rgb <1, 1, 1>
}

// Globe of the world
sphere {
  <0, 3.5, 0>, 3.5
  texture {
    pigment {
      image_map {
        jpeg "world_map.jpg"
        map_type 1
      }
    }
    finish {
      ambient 0.2
      diffuse 0.8
      specular 0.5
      reflection 0.1
    }
  }
  rotate <0, 0, 0>
}

// Chessboard
plane {
  y, -0.5
  texture {
    pigment {
      checker
      color rgb <1, 1, 1>,
      color rgb <0, 0, 0>
    }
    finish {
      ambient 0.2
      diffuse 0.8
      specular 0.4
    }
    scale <8, 1, 8>
  }
}

I saved the code from ChatGPT as a .pov text file, added a graphic of Earth from NASA, and used it in the latest Mac version of POV-Ray. Et voilà …

Point proven in several ways 🙂

In any event, when people start debating today about using text to create stunning images via AI, there is some serious and indeed illustrious history. The difference is of course that I didn't have to write the code, just input some simple everyday language. The biggest difference, though, is that AI tools such as DALL-E have a vast dataset of existing images to work with, scraped from the web, whilst ray tracing starts from scratch.



Turner Whitted. 1980. An Improved Illumination Model for Shaded Display. Holmdel, New Jersey: Bell Laboratories.

David K Buck. The early history of POV-Ray.

A Brief History of the POV-Ray Source-Code Base.

POV-Ray Scene Description Language

Introduction of the Raytracing Technology Part 1. 2022 UWA Technology, Shanghai.

Ray Tracing (Graphics).

James Buck. 2019. The Ray Tracer Challenge. Pragmatic Bookshelf.

Dirk Bartz, James T. Klosowski, Dirk Staneker. 2005. Tighter Bounding Volumes for Better Occlusion Culling Performance. Visual Computing for Medicine, University of Tübingen, Germany / IBM, T.J. Watson Research Center, Hawthorne, NY, USA.


NOTE: Apologies to David K Buck as the original version of this post had some erroneous references. Some were old, third party links but it is also now clear that ChatGPT ‘hallucinated’ in turning historically accurate facts into false links, which I should have double-checked. Lesson learned on AI.




Mike Miller:

Gilles Tran:

Jaime Vives Piqueres:

Christoph Hormann:



When I asked ChatGPT to list a few historical exemplars of artwork, here is the list. Sadly, I am having trouble finding the originals … more work needed. Or did ChatGPT hallucinate again?

  • “The Chessboard” by Jaime Vives Piqueres (1994): This artwork features a chessboard with intricately modeled pieces, showcasing the abilities of POV-Ray to render realistic materials, reflections, and shadows.
  • “The Temple of the Dragon” by Carlos Prados Ramos (1995): This piece portrays a temple with a dragon statue, demonstrating POV-Ray’s capability to generate complex scenes and detailed geometry, as well as atmospheric lighting.
  • “Cathedral” by Dan Farmer (1996): This image of a gothic cathedral highlights the intricate architectural elements and atmospheric lighting made possible through POV-Ray’s rendering capabilities.
  • “A Journey Through Fractal Space” by Paul Bourke (1997): This work demonstrates the power of POV-Ray to render complex mathematical objects, such as fractals, with a high level of detail and realism.
  • “Lighthouse” by Andreas Stöhr (1999): This image of a lighthouse in a stormy sea showcases the software’s ability to create dynamic, photorealistic scenes with natural elements like water, clouds, and waves.
  • “A Trace of POV-Ray” by Gilles Tran (2004): A tribute to the 15th anniversary of POV-Ray, this image showcases the software’s ability to render complex shapes, textures, and lighting effects.
  • “Dodecahedron” by Jaime Vives Piqueres (2009): This image features a dodecahedron with intricate interlocking gears, demonstrating POV-Ray’s ability to render complex geometry and reflections.
  • “Chess Set” by Jaime Vives Piqueres (2011): This detailed chess set highlights the software’s capacity to create realistic materials, such as wood, metal, and glass, as well as the intricate lighting and reflections that contribute to the photorealistic quality of the image.
  • “Beneath the Sea” by Christoph Hormann (2012): This underwater scene showcases POV-Ray’s ability to render natural environments, volumetric lighting, and caustics, creating an immersive and realistic underwater experience.
  • “The Library” by Jaime Vives Piqueres (2015): This image of a library interior highlights the software’s ability to handle large scenes with numerous objects, materials, and complex lighting, resulting in a visually stunning and atmospheric composition.