This one belongs to the chapter “it’s the small things that count”: on a number of occasions, the area of user input is not the whole render area, or in other words, the render area is larger than what is considered the view into the scene. This usually happens when controls are drawn on top of the render area. Although the 3D scene can still be seen between the controls, a “fit to view” should not position the scene behind them, as the user can neither properly see nor manipulate it there. What we need is an “overdraw area”: a region where 3D content is rendered but where no relevant parts of the scene are placed. On macOS, the typical use case is an NSVisualEffectView: it needs content behind it to render its effect, but the user cannot properly see what’s behind it.
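As a sketch of the idea (hypothetical names, not an actual API): “fit to view” then works against the render area shrunk by the overdraw insets, so the controls’ footprint is excluded from the fit calculation:

```cpp
#include <cassert>

// Illustrative types; not an actual SDK API.
struct Rect   { double x, y, w, h; };
struct Insets { double left, top, right, bottom; };

// The render area minus the overdraw margins covered by overlaid controls.
Rect effectiveViewArea(const Rect& renderArea, const Insets& overdraw) {
    return { renderArea.x + overdraw.left,
             renderArea.y + overdraw.top,
             renderArea.w - overdraw.left - overdraw.right,
             renderArea.h - overdraw.top  - overdraw.bottom };
}

// "Fit to view" scale so content of size (cw, ch) fills the effective
// area, not the full render area hiding behind the controls.
double fitScale(const Rect& view, double cw, double ch) {
    double sx = view.w / cw;
    double sy = view.h / ch;
    return sx < sy ? sx : sy;
}
```

With a 200-pixel sidebar on the left, the scene is centered and scaled within the remaining 600 pixels rather than the full 800.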
I have been tweaking the UI code for the macOS variant of Shapeflow over the last couple of days. It’s surprisingly difficult to get rid of all the kinks and quirks. Two interesting aspects were how to get on-demand rendering working (i.e. only drawing a new frame when something has actually changed) and how to draw only the active document tab.
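The on-demand part essentially boils down to a dirty flag: scene or camera changes set it, and the display tick only renders when it is set. A minimal sketch (names are illustrative, not Shapeflow’s actual code):

```cpp
// Minimal on-demand rendering sketch: render only when marked dirty.
struct OnDemandRenderer {
    bool dirty = true;    // draw the very first frame
    int  framesDrawn = 0;

    // Called whenever the camera, scene, or UI state changes.
    void markDirty() { dirty = true; }

    // Called on every display tick; skips rendering if nothing changed.
    void tick() {
        if (!dirty)
            return;
        drawFrame();
        dirty = false;
    }

    void drawFrame() { ++framesDrawn; }  // stand-in for the real draw call
};
```

The same flag can be kept per document tab, so inactive tabs never trigger a draw at all.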
Over the last couple of months, I got some great feedback on my article about SVN/GIT policies. How to use a VCS “correctly” seems to be a hotly debated topic in pretty much every company, and there are lots of misconceptions out there. Since the article proved to be a good point of reference for starting a more in-depth discussion, I’ve updated and extended it with some new points:
- a short discussion on why most developers use their VCS more like a backup tool than a tool for collaboration
- comparing regularly reading the commit log to reading the newspaper
- added a section on work-in-progress branches as a symptom of “no feature branches” policies
- comparing the reasoning behind GIT rebase vs GIT merge
If you’ve got thoughts/feedback, please feel encouraged to send it to me. I’m particularly interested in anecdotes as evidence of why certain policies lead to specific outcomes.
I remember reading a CVS book back in 2002 or so. It contained the quote “coding without a versioning system is like parachuting without a parachute”. I have always liked this quote because it captures what an essential role version control plays in programming. The world has changed a lot since then: almost everyone now uses GIT or SVN, IDE integration has become standard, and sites like GitHub make it easier than ever. The problem is that simply having a version control system (VCS) is only part of the solution: one also needs the right mindset and policies to use it effectively. Otherwise, one may very well have a parachute yet keep fighting with it while getting tangled in the ropes.
This is a guide that presents best practices for working with a version control system. It is primarily motivated by all the companies I have visited or worked at where the VCS had gone awry at some point, and of course by those where it worked like a charm. I assume that the reader is familiar with the basic concepts of commits and branches and has actively worked with a VCS such as SVN or GIT.
Well, here is something I have wanted to do for a long time: create my own surroundings. The first step, of course, is to create a 360/180 panorama image to use as an env-map. After some unsuccessful first attempts, I finally managed to get a decent result. So here are a couple of tips for anyone else trying to do this.
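For sampling such an env-map later on, renderers typically use the standard equirectangular mapping from a direction vector to texture coordinates. A minimal sketch, assuming a normalized direction and a y-up convention (conventions vary between engines):

```cpp
#include <cmath>

// Standard equirectangular (360/180) mapping: longitude -> u, latitude -> v.
struct Vec3 { double x, y, z; };
struct UV   { double u, v; };

UV dirToEquirectUV(const Vec3& d) {
    const double kPi = 3.14159265358979323846;
    // assumes d is normalized and +y points up
    double u = 0.5 + std::atan2(d.z, d.x) / (2.0 * kPi);  // longitude
    double v = 0.5 - std::asin(d.y) / kPi;                // latitude
    return { u, v };
}
```

The horizon maps to v = 0.5 and straight up to v = 0, which is also a quick sanity check for whether a freshly stitched panorama is oriented correctly.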
As a quick side project, I’ve started working on the problem that postal code area information in OpenStreetMap is often insufficient. Why? Because it’s a nice showcase of how flexible the Core SDK is, it allows me to stress-test the 2D / orthogonal handling code paths, and it is a good opportunity to potentially contribute to this great, crowd-sourced project.
The render layout sub-system in the Core SDK is one of its central features. It composes multiple views or multiple output displays into a single consistent layout and is one of the reasons why the SDK is so versatile. It also helps with input mapping and other aspects such as stereoscopic VR/AR rendering. Here is how:
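As a rough illustration of the idea (hypothetical types, not the actual Core SDK API): a layout can be thought of as a set of viewports covering the output, plus the mapping from input coordinates back to the view that owns them:

```cpp
#include <vector>

// Illustrative sketch of a render layout; not the Core SDK API.
struct Viewport { int x, y, w, h; int viewId; };

struct Layout {
    std::vector<Viewport> viewports;

    // Side-by-side split, e.g. for a two-view layout or stereo rendering.
    static Layout splitHorizontal(int w, int h) {
        return { { { 0,     0, w / 2,     h, 0 },
                   { w / 2, 0, w - w / 2, h, 1 } } };
    }

    // Input mapping: which view does a pointer position belong to?
    int viewAt(int px, int py) const {
        for (const auto& v : viewports)
            if (px >= v.x && px < v.x + v.w &&
                py >= v.y && py < v.y + v.h)
                return v.viewId;
        return -1;  // outside all viewports
    }
};
```

The same structure covers multi-display output (one viewport per display) and stereo (one viewport per eye), which is what makes a single layout abstraction so convenient.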
After adding FBX support the other week, there was yet another reason to finally add support for normal and specular maps. While this is still a simple Phong lighting model rather than physically based rendering, it is a nice improvement to the overall image quality and helped develop some further multipurpose code.
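As a rough sketch of what the change amounts to (simplified to grayscale intensities; illustrative, not the engine’s actual shader): the Phong model stays the same, but the geometric normal is replaced by the normal-map sample and the specular term is masked by the specular map:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Phong lighting with map inputs: n is the (normal-map perturbed) surface
// normal, l the light direction, v the view direction, all normalized.
// specMask is the specular-map value in [0, 1].
double phong(const Vec3& n, const Vec3& l, const Vec3& v,
             double shininess, double specMask) {
    double ndotl = dot(n, l);
    double diff  = std::max(0.0, ndotl);
    // reflect l about n: r = 2 (n . l) n - l
    Vec3 r = { 2.0 * ndotl * n.x - l.x,
               2.0 * ndotl * n.y - l.y,
               2.0 * ndotl * n.z - l.z };
    double spec = specMask * std::pow(std::max(0.0, dot(r, v)), shininess);
    return diff + spec;
}
```

A specular map simply drives `specMask` per texel, so rough parts of a surface lose their highlight while polished parts keep it.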