Capturing British Beauty With Photogrammetry

Paul Dickinson shared some of his thoughts about creating beautiful 3D environments and showing them to the world in digital form.

Hello everyone, my name is Paul Dickinson and I live in the beautiful county of North Yorkshire in the United Kingdom.

I am a web developer by trade, but have always been interested in computer graphics, both 2D and 3D. My interest in this area of computing started way back in the 1980s, when, unfortunately, computing power and resources were very limited. We are talking 48KB (yes, you read that right, 48KB) of memory and hard disk drives with a whopping 52MB of storage, not to mention loading software from cassette tapes. Wow, how things have changed!

As you can imagine, these physical restrictions made achieving even a fraction of what can be produced graphically today nigh on impossible, but back then, that was all we knew, and we could never have imagined in our wildest dreams the sheer power that is available at our fingertips today.

Anyway, enough of my ramblings about the ‘good’ old days, let’s fast forward to the present and why I am writing this article – photogrammetry and 3D asset optimisation.

For many years I have been an avid photographer, then just over a year ago I happened across a mystical subject called ‘Photogrammetry’. Well, my geeky and creative inner self was immediately chomping at the bit to find out more about this interesting subject.

I am sure that the majority of you are aware of what photogrammetry is, but for those of you who aren’t, put simply, it is the process of surveying a subject using 2D stills, then processing those images with clever software such as Agisoft Photoscan, Reality Capture or some other off-the-shelf product to produce a 3D asset. Obviously there is a lot more to it than that, but in a nutshell, that is how your 3D model comes into being via photogrammetry.

I couldn’t believe my luck in finding photogrammetry. Suddenly, I was able to take my passion for photography and match it up with my long-time interest in 3D and CGI, enabling me to produce extremely detailed 3D models that could then be utilised in commercial arenas such as gaming and film.

Why is Photogrammetry the next big thing?

Well, actually, photogrammetry isn’t that new and has been around for a good few years, starting out in its basic form in the mid-19th century at the dawn of ‘modern-day’ photography. Obviously, things have moved on a lot since then, and computers started getting involved in the late 1980s. It is only in the last few years that the biggest jump in technology has allowed photogrammetry to spread its wings, feeding off fast processors, absurd amounts of RAM and optically superior photographic hardware, all at a very reasonable cost. Now, how does this help gaming and film/TV? Read on …

The length of time needed to create 3D assets from scratch for use in computer games or film and TV can be incredibly high, equating to lengthy production schedules and, of course, an increase in cost. The introduction of photogrammetry means that real-life subjects can be translated with great accuracy into a digital model, cutting out a lot of the hard groundwork and error involved in creating a model from scratch. The setup required to carry out this task is a relatively small outlay in comparison to the path taken by hand-created geometry. That said, even models generated using photogrammetry still require considerable TLC: mesh clean-up, retopology, mesh optimisation, UV mapping, and the baking of numerous texture maps to assist asset optimisation, for gaming in particular. The better the kit used, the less that needs to be done with regards to mesh clean-up, but it is still a necessary step in the process.

Speed, scale, detail and accuracy are what make photogrammetry so ground-breaking, heralding a new era of affordable 3D assets that can be used as base meshes to be translated into something completely different, combined to make larger assets, or just used ‘out of the box’. Asset libraries containing high-quality digital models are growing larger, with a wide variety of focus, from human anatomy to rocks, trees, surfaces, guns and buildings … the list goes on. If it can be scanned in the real world, chances are you will find it in a library.


I have recently been approached by an innovative software house in the USA – Blackthorn Media (http://blackthorn-media.com/) – who have a great deal of experience in special effects in the film industry, with credits on ‘The Matrix’, ‘Snow White and the Huntsman’, ‘Life of Pi’, and ‘The Hunger Games’ to name a few. These guys are creating a ground-breaking multi-part virtual reality story-based game called ‘The Abbots Book’. I have supplied them with many models that suit their environment, and done so in a relatively short space of time. As a result, this has helped free up the bottleneck of 3D asset creation and allowed the team of Academy and Emmy Award-winning content creators at Blackthorn Media to concentrate on the game’s story, development and mechanics. This approach to the workflow saves time and money and maximises creativity and quality. I think we can all see that this will be a massive benefit for any indie game developers who otherwise would not have the funds, or access to enough skilled modellers, to produce high-quality results, certainly not in the very short time they have to deliver to market.

OK, so that is what photogrammetry is, and how it is becoming the next big thing for 3D asset creation. Now, what do you need to create these detailed models, and how do you go about creating them?

What equipment do you use?

  • Sony RX100 M3
  • Nikon D200 DSLR
  • Monopod
  • Wacom graphics tablet
  • Turntable and 3 x 64 LED lighting units (for object scanning)
  • 2 x Lightboxes, each using a 135W daylight bulb (for object scanning)

To be honest, you don’t really need a lot of equipment in order to have a go at photogrammetry. Anything from a mobile phone up to a pro DSLR camera will do. The emphasis here, though, is on the words ‘will do’. If you want good results from a scan, you will need clean, well-balanced images that pack a lot of detail – most mobile phones can yield passable results, but they unfortunately won’t quite cut it in that respect.

Professional scanning services use arrays of multiple cameras in conjunction with a lighting cage, and possibly a rotating platform. These setups don’t come cheap, but they capture a subject very quickly with very even light falling across the stage, so they make sense commercially. This approach isn’t very portable though, so if you need to scan an object out in the field, you need to work in a lightweight and portable fashion.

The hardware that I use at the moment is either a Sony RX100 M3 compact camera or a semi-pro Nikon D200 DSLR with a 35-70mm lens. The Sony is great if you want to travel light; the Nikon is great if you want more control over your scan/shots, particularly if you need to get closer to a distant subject. The Sony is 20 megapixels and the Nikon is 10 megapixels, but in the scheme of things, the megapixel count has very little to do with the quality of your scan. There is absolutely no reason why a 6 or 7 megapixel camera couldn’t yield superior results to one of its bigger-megapixel rivals. The quality of the glass and sensor is what counts – remember, it’s not the size that matters, it’s what you do with it!

If you have the time, a tripod can be beneficial, but I have to admit that if you use a fast enough shutter speed, you can more often than not get away without one. One caveat to this, though, is a monopod … one of these fellas can really help if the subject you are trying to capture is higher than yourself. Yes, you do get funny looks, even comments, from people, but hey, we artists have to suffer these hardships in order to achieve perfection!


What software do you use in your photogrammetry workflow?

A brief outline of what I use each piece of software for is as follows:

Photoscan is used to process the dataset for the model you wish to create (the dataset being the photographs of the object). With this tool you can generate sparse point clouds, dense point clouds, meshes (generated from the point cloud data), and full textured models.
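
For anyone who wants to batch this step, Photoscan also ships with a built-in Python console. The sketch below is only an outline of the sparse cloud, dense cloud, mesh and texture sequence described above, written against the PhotoScan 1.x scripting API as I understand it; exact method names, enums and quality settings vary between versions (and in the renamed Metashape releases), and the file paths are placeholders, so treat it as a rough guide rather than a drop-in script.

    import glob
    import PhotoScan  # only available inside PhotoScan's own Python console

    doc = PhotoScan.app.document
    chunk = doc.addChunk()

    # Load the photo dataset for the subject
    chunk.addPhotos(glob.glob("/path/to/survey/*.jpg"))

    # Sparse point cloud: match features across photos and align the cameras
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection)
    chunk.alignCameras()

    # Dense point cloud (the slow, memory-hungry part)
    chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

    # Mesh from the dense cloud, then UVs and a baked texture
    chunk.buildModel(surface=PhotoScan.Arbitrary,
                     source=PhotoScan.DenseCloudData,
                     face_count=PhotoScan.HighFaceCount)
    chunk.buildUV(mapping=PhotoScan.GenericMapping)
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

    chunk.exportModel("/path/to/output/master_scan.obj")
    doc.save("/path/to/output/project.psz")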

Meshmixer and Meshlab are used to clean and repair the 3D mesh that is output from Photoscan.


3D Coat is used to paint in missing detail and correct errors in generated textures, as there will always be some texture rework due to the complex nature of many generated 3D models.

InstantMeshes is used to decimate/re-topologise the mesh into a form appropriate for the market that requires it, e.g. a low poly quad mesh for gaming, a high poly tri and quad mesh for CGI, etc.

ShaderMap 3 is the perfect tool for baking the various textures that you need in order to optimise and improve the visual quality of your model. Within ShaderMap, you can bake anything from ambient occlusion maps through to normal maps, and this can be done from a 2D texture as well as from your 3D model (using a lower poly cage to project the map details onto).
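
As an aside, the maths behind baking a normal map from a 2D texture is fairly approachable. The snippet below is not ShaderMap, just a rough NumPy/Pillow illustration of the underlying idea: treat a greyscale detail image as a height field, take its gradients, and encode the resulting surface normals as RGB. The file names and the strength value are made-up examples.

    import numpy as np
    from PIL import Image

    # Greyscale detail image interpreted as a height field in [0, 1]
    height = np.asarray(Image.open("detail_height.png").convert("L"),
                        dtype=np.float32) / 255.0

    strength = 2.0                   # how strongly bumps tilt the normals
    dy, dx = np.gradient(height)     # slope of the height field in y and x

    # Per-pixel normals from the slopes, then normalise them
    # (green-channel sign conventions differ between engines)
    normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Remap from [-1, 1] into the usual 0-255 RGB normal-map encoding
    rgb = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
    Image.fromarray(rgb).save("detail_normal.png")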

The above are the main tools that I use in my workflow, but workflows are very dynamic, and there is always more than one way to skin a cat. The process of generating and optimising a 3D asset is predominantly the same, but the choice of tools that are available to help you achieve your required result are many, ranging in price from free to unattainably expensive for most mere mortals.

How do you approach a survey and actually create a 3D Asset?

First off are the weather and lighting conditions. One of the most important aspects of photogrammetry is that, no matter what you are capturing, you need good, bright but diffused light. Bright light allows you to close the aperture of your camera down to maximise depth of field and capture nice sharp images with minimal noise. Diffused light means that shadows on the object you are capturing will be kept to an absolute minimum … the light needs to be as flat as possible, without harsh highlights or strong shadows. The same applies if you are capturing using a turntable indoors: your lighting needs to be diffused and bounced around the object evenly – a light tent is perfect for this. There isn’t really any best time of day to carry out an external survey of an object or scene, as long as the lighting meets the conditions above.

Secondly, the process of capturing has to be approached very logically. Lots of disparate photos of a subject won’t cut it. At the end of the day, the algorithms that produce models via photogrammetry use very advanced pattern-matching routines. We as humans can perceive the world around us adeptly, but unfortunately a computer is dumb; it doesn’t know what it is looking at and needs to be ‘taught’ to understand the depth and detail given to it via your image dataset. With this in mind, the photos that make up the survey need to be sharp, detailed and overlap each other by at least 50%. You must also photograph your subject from as many angles as possible so as to capture the detail in crevices, corners and overhangs. Often, when you look at the camera placement interpreted by photogrammetry software, you will notice that the photographer has arced around the subject in many sweeps, both laterally and vertically (if possible).
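
Before heading out, it can help to do a quick back-of-envelope count of how many shots a sweep will need. The sketch below is only a crude planning aid, not a rule from any particular package: it assumes each frame covers roughly the camera’s horizontal field of view of the subject and advances by half of that to keep the ~50% overlap mentioned above. In practice you would shoot more photos, not fewer, and the example field of view is an assumed figure.

    import math

    fov_degrees = 54.0   # assumed horizontal field of view (roughly a 35mm-equivalent lens)
    overlap = 0.5        # minimum frame-to-frame overlap you are aiming for

    step = fov_degrees * (1.0 - overlap)        # angular advance per shot
    photos_per_ring = math.ceil(360.0 / step)   # one full lateral sweep around the subject
    rings = 3                                   # e.g. low, eye-level and elevated (monopod) passes

    print(f"step of {step:.0f} degrees -> at least {photos_per_ring} photos per ring, "
          f"~{photos_per_ring * rings} for {rings} rings")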

Quite a number of my 3D models cover large areas, not just a single small object such as a statue or rock. Unfortunately this can present an operational problem. You can usually gain good access around a smallish individual object and thus get good coverage, but with bigger subjects access varies and the survey gets a whole lot trickier. Quite often, parts of a large object or scene can’t be accessed close up, and the survey may involve shooting from different sides of a valley, in much wider sweeps around the subject, from different banks of a river, and so on. So how do you ensure that your photos continue to make sense to the computer? This is where ‘tie’ points come in: you need to look for patterns or detail that can serve as anchor points and be seen from completely different shooting positions, and make sure there is enough overlap for the computer to link them up. Even when some parts of an object or scene are not visible from a given position, these anchor points throw the software a lifeline and keep the chained pattern matching running smoothly.

Another hurdle to get to grips with in larger surveys is that they often have so many surfaces to capture that it is easy to get lost, forget what you have already taken, and fail to shoot a complete dataset that matches up. An example is a bridge (see below; blue rectangles = camera positions). It seems simple to look at, but think about how many planes there are to capture from many (and I mean many) angles: the cobbled river bed, the stone river walls, the grass and path crossing the bridge, the bridge walls from either side and from above, the wall running alongside the path, and the underside of the bridge. Each of these planes will need at least two passes of photos with at least 50% overlap and enough tie points to link each plane to the others. It is so easy for a survey to get away from you with this many things to remember, only to find when you get back to base that the model cannot be generated successfully. A logical approach has to be adopted: rather than getting bogged down with everything that still needs to be photographed, it’s best to concentrate on one plane at a time and then, if necessary, consider what needs to be done to get them all to match up in the 3D world.

Unfortunately, the more photos you supply for the software to process (and/or the higher their resolution, e.g. 20 megapixels), the slower and more memory-hungry the software gets, sometimes resulting in system crashes. A happy balance has to be struck: give the software enough to crunch to deliver a high-quality scanned 3D object without bringing your system to its knees. Obviously, some software is better than others at handling resources. I don’t think Agisoft Photoscan is the best in this department, while its main rival, Reality Capture, is much better behaved, but both can produce superb results.

Do you produce low or high poly assets, and do you optimise them in any way?

I always start with a very high poly count, highly detailed model, and use this as the master from which to generate optimised 3D models. Once you have this ‘master’ model, you can decimate, retopologise and remap the UVs for pretty much any client who approaches you, be it from the gaming industry right up to CGI use in films. One such platform that I have to decimate for is Sketchfab.com. I may start with a 20-40 million poly asset, then decimate it down to 1 million polys (or less) purely and simply because of bandwidth and performance on the site (if you want to take a peek, you can view my profile here).
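
By way of illustration (this is not the toolchain described above, which uses InstantMeshes), the decimation step itself can be scripted in a few lines with the open-source Open3D library; the file names and the one-million-triangle target below are example values.

    import open3d as o3d

    # Load the dense master scan produced by the photogrammetry software
    mesh = o3d.io.read_triangle_mesh("bridge_master_scan.obj")
    print(f"master mesh: {len(mesh.triangles)} triangles")

    # Collapse the mesh down to a web-friendly triangle budget
    decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=1_000_000)
    decimated.compute_vertex_normals()  # recompute shading normals after the collapse

    o3d.io.write_triangle_mesh("bridge_web_1m.obj", decimated)
    print(f"decimated mesh: {len(decimated.triangles)} triangles")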

As for other aspects of optimisation, once again it comes down to how the model is going to be utilised. If the 3D model is going to be used as a game asset, then the poly count needs to go as low as possible without losing all definition in the skin/mesh, and you lean on the various texture maps to help ‘fill in’ the missing geometry and detail. If the 3D model is going to be animated in some way, then retopologising to a quad poly mesh is a must to avoid ugly and unwanted creases in the animation process. Static models can quite easily remain as a tri poly mesh – there is no need to make extra work if the asset is fit for purpose.

Finding the sweet spot for the size of the various texture maps is also important: it is one thing to deliver a perfect low poly model to a client, quite another to have it rendered unusable by long load times and high memory usage due to large textures. Some textures can be a lot smaller than others, e.g. the albedo or diffuse map will need to be one of the largest texture maps, as this is what ‘dresses’ the model and helps to sell the realism, whereas ambient occlusion maps can get away with being a bit smaller as they merely help shadow detail.
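
A quick bit of arithmetic makes that texture ‘sweet spot’ concrete: an uncompressed map costs width × height × channels bytes in memory, plus roughly a third more for a full mip chain. The map list and resolutions below are example values only, not figures from any specific project.

    # Rough uncompressed memory cost per texture map (example figures only)
    def texture_mb(size, channels=4, mipmaps=True):
        bytes_ = size * size * channels
        if mipmaps:
            bytes_ *= 4 / 3          # a full mip chain adds roughly 33%
        return bytes_ / (1024 * 1024)

    maps = {"albedo": 4096, "normal": 4096, "ambient_occlusion": 2048, "roughness": 2048}
    for name, size in maps.items():
        print(f"{name:18s} {size}px  ~{texture_mb(size):6.1f} MB")

    print(f"total ~{sum(texture_mb(s) for s in maps.values()):.1f} MB uncompressed")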

Providing the client with a number of format options also needs to be considered, such as .obj, .fbx, .ply, etc., possibly reference textures, and different texture sizes to cover workflows ranging from draft rendering right through to final production.
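
If the delivery mesh is already in a neutral format, producing several of those formats can also be scripted. The sketch below again uses Open3D purely as an illustration under the same assumptions as before (it writes OBJ, PLY and STL among others; FBX needs a different tool), and the paths are placeholders.

    import os
    import open3d as o3d

    os.makedirs("delivery", exist_ok=True)

    # Load the decimated delivery mesh once, then write it out in several formats
    mesh = o3d.io.read_triangle_mesh("bridge_web_1m.obj")
    mesh.compute_triangle_normals()  # STL export needs face normals
    for ext in ("obj", "ply", "stl"):
        o3d.io.write_triangle_mesh(f"delivery/bridge_web_1m.{ext}", mesh)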

So, what about the accuracy of scans using the photographic method?

If you want to map out spaces and objects with the highest accuracy, you wouldn’t use the photographic approach. Although photogrammetry is pretty accurate, Lidar (using lasers) is your best option. Lidar is used in construction and scientific surveys where accuracy is everything, and it costs a small fortune in equipment. The problem with Lidar is that, although it is very accurate, it is not very adept at producing highly detailed textures, which is what we all need for our sexy CGI, be it for games or film/TV. When it comes to these two markets, pretty good accuracy is all we really need, and this is where photogrammetry comes in. The trade-off is not having millimetre-perfect measurements, but the assets produced have a lot more presence and visual use. This may make photogrammetry sound a bit hit and miss when it comes to accurately representing an object or scene, but in actual fact, if it is done properly, I would challenge anyone to notice the difference unless they are number-crunching 3D data to the highest achievable levels.


Conclusion

Photogrammetry is a fascinating subject, and the scan itself is merely the starting point in generating 3D assets. The process beyond photogrammetry is still very necessary: texturing, optimisation, mesh cleaning, retopologising and UV remapping, to mention just a few steps, and each of these areas is a topic in itself. I hope this short article has made it clear why photogrammetry is becoming ever more important in today’s industry, helping to keep costs down, maximise quality and enable studios to focus on story/plot, mechanics and polish.

If you have read this far, I thank you for your time and hope this article has been both interesting and helpful. You can find my profile on Sketchfab where you can view my full portfolio and navigate my models in real-time 3D.

If you have any questions, would like to purchase a model, or require a bespoke capture, feel free to contact me and I will be more than happy to try and accommodate.

Paul Dickinson, Digital Artist

Comments (8)

  • Will (6 years ago)

    I have some imaging samples from using OpenDroneMap (WebODM) to process drone imaging into georeferenced photos and textured meshes. I did a video series on this you can see here: https://www.blend4web.com/en/forums/topic/3428/
  • Paul (6 years ago)

    Hi Milo, thank you for your comment ... yes I know what you mean ... it can get quite messy very quickly, particularly with complex subjects ...
  • Paul (6 years ago)

    Hi Martin C, thank you for your comments.
    3D and CGI is not my main occupation, so I cannot really refer to myself as a 3D professional ... likewise, I think 'dabbler' is now the wrong label, as 3D modelling is now taking up more and more of my time ;-) ... there is definitely money to be made in this arena, and to be honest, I much prefer being creative in 3D rather than doing my day job.

    Just dive into photogrammetry, it is a fascinating subject, and, depending on your level of knowledge, can be quite a steep, but fun, learning curve ... there is always new stuff to learn and improve on as the subject is so vast in nature.

    The cart bridge scan took about 1 hour to survey, consisting of 137 photos (not one of my largest surveys, but still a good size) ... luckily, mesh cleanup and texture cleanup didn't need too much work, approx 2 hours for that. As for the creation of the model in Photoscan, which includes the sparse point cloud, dense point cloud, mesh generation, and texture creation, I would say approx 3 hours of processing.
  • Milo (6 years ago)

    nice progress.. you will start facing some major problems, moving forward with photogrammetry.. it is a dirty, frustrating job once you get to a certain level of quality..
  • Martin C (6 years ago)

    Just one other thing, you consider yourself only a 3d 'dabbler', is there either not enough money in this or you prefer your day job?
  • Martin C (6 years ago)

    Really good read! Photogrammetry's been on my list of things to do for a few years now, but I'm normally put off by the cost of all the equipment needed, though like you say you can start with the basics.
    Can you break down how long the scene with the stream and the bridge over it took? Roughly how long did the shoot take, how many photos were taken, and how long was spent in each piece of software?
  • Paul (6 years ago)

    Thank you Ben S ... glad you found it interesting. I haven't used a drone yet, however, I do plan to at some point soon
  • Ben S. (6 years ago)

    Fascinating post. Do you also use drones to capture tall objects?
