Post by chrisd on Apr 21, 2019 18:25:47 GMT
I'm still making my way through the book (just finishing Chapter 9 - Planes). I've deviated slightly from the book here and there, and I have many ideas for other directions to take things. Some of them fit within the confines of the book, others are geared towards what to do after finishing the book. After writing them all down, I thought maybe these ideas would be of interest to others.
0. Try using Cucumber (or any of its variants, or another tool that reads Gherkin feature files) for the tests instead of writing straight code or using unit-testing frameworks. Although this is not a true BDD (Behavior-Driven Development) exercise, it can be a helpful way of learning the tool. Warning: there is some learning curve if you've never used it before.
1. Both the Canvas and the Camera have a width and height, which seems slightly redundant. The book's render method creates a new Canvas based on the camera's width and height, writes to it, and returns the Canvas only when it's all finished. Change it around so the render method takes in a pre-existing Canvas. This change may seem like busywork, but it enables suggestion #2 below. It would be nice to ultimately remove the width and height from the Camera object and just rely on those in the Canvas; unfortunately, that would break the tests in the book, so you might want to wait on it until after you've completed the book.
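Since the book is language-agnostic, here's a rough Python sketch of suggestion #1 — render writing into a caller-supplied Canvas instead of allocating its own. Canvas, ray_for_pixel, and color_at follow the book's pseudocode names; the bodies here are illustrative stand-ins, not the book's actual implementation.

```python
class Canvas:
    def __init__(self, width, height):
        self.width, self.height = width, height
        # every pixel starts black, as in the book
        self.pixels = [[(0, 0, 0)] * width for _ in range(height)]

    def write_pixel(self, x, y, color):
        self.pixels[y][x] = color

def render(camera, world, canvas):
    """Render into `canvas` rather than returning a new one."""
    for y in range(canvas.height):
        for x in range(canvas.width):
            ray = camera.ray_for_pixel(x, y)          # as defined in the book
            canvas.write_pixel(x, y, world.color_at(ray))
```

The Camera no longer dictates the image size; the Canvas the caller passes in does, which is exactly what suggestion #2's CanvasComponent needs.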
2. Creating PPM files and then opening them in Gimp or some other program can be a bit cumbersome. Try making a desktop app to contain all of the Putting It Together exercises. You can separate them out into one tab per exercise, or use a listbox or tree on the left to select the active exercise. Make a CanvasComponent that displays the Canvas. A render button would kick off the render; when it's done, tell the component to repaint and the image appears. You may notice that as the renders get longer, the UI stalls during the rendering.
3. When a render is complete, display the completed render time in the UI. Exercise for the reader: What other statistics would you want to display?
4. As suggested by many others, cache the inverse matrices. It makes a HUGE difference. I did it by making the Matrix.inverse method lazy-load and cache the result. If any values change using the Matrix.put(x, y, value) method, then clear the cached inverse. Exercise for the reader: Using the render time in suggestion #3, how much benefit does caching the inverse give you? Does caching transposed matrices make a difference?
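The lazy-load-and-cache pattern from suggestion #4 might look like this Python sketch. The matrix math itself is elided down to the 2x2 case just to make it runnable; the `inversions` counter is my own addition for measuring the benefit, not anything from the book.

```python
class Matrix:
    def __init__(self, values):
        self.values = [row[:] for row in values]
        self._inverse = None      # cached inverse, invalidated on mutation
        self.inversions = 0       # counts actual computations, for profiling

    def put(self, x, y, value):
        self.values[x][y] = value
        self._inverse = None      # any write invalidates the cache

    def inverse(self):
        if self._inverse is None:               # compute lazily, once
            self.inversions += 1
            self._inverse = self._compute_inverse()
        return self._inverse

    def _compute_inverse(self):
        # 2x2 only, enough to show the pattern; the book's cofactor
        # method goes here for the general case.
        (a, b), (c, d) = self.values
        det = a * d - b * c
        return [[d / det, -b / det], [-c / det, a / det]]
```

Repeated calls to inverse() return the cached value; a put() forces one recomputation on the next call, which is where the render-loop savings come from, since transforms rarely change mid-render.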
5. Jamis often recommends isolating a render down to a single ray so you can more easily debug what is going on. Provide an option for clicking on a pixel to make it render (or re-render) just that pixel.
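A single-pixel render for suggestion #5 is tiny once suggestion #1 is in place. This is a sketch; `render_pixel` is my name for it, and a GUI click handler would call it with the clicked coordinates.

```python
def render_pixel(camera, world, canvas, x, y):
    """Render (or re-render) just one pixel, for debugging a single ray."""
    ray = camera.ray_for_pixel(x, y)   # the book's camera function
    color = world.color_at(ray)        # the book's shading entry point
    canvas.write_pixel(x, y, color)
    return ray, color                  # handy to print or step through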
6. Create a "Save As" feature, where the user can save as PPM, PNG, JPG, OpenEXR, or another format. Don't reinvent the wheel: find libraries for the formats other than PPM.
7. Render in a background thread, so the Canvas updates in near real time. This is more easily doable if you did suggestions #1 and #2 above, since you can attach some sort of listener to the Canvas so it can tell the CanvasComponent to repaint as pixels are updated. Exercises for the reader: What do you have to do to avoid race conditions between the ray-tracing thread and the UI thread? Does this affect the total render time? Can you do the screen updates in batches?
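One way to sketch suggestion #7 in Python: a canvas that notifies listeners as pixels land, and a worker thread driving the render. ObservableCanvas and render_async are my names; in a real GUI the listener would schedule a repaint on the UI thread rather than run directly.

```python
import threading

class ObservableCanvas:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = {}
        self.lock = threading.Lock()     # guards writes from the worker
        self.listeners = []              # e.g. the CanvasComponent's repaint

    def write_pixel(self, x, y, color):
        with self.lock:
            self.pixels[(x, y)] = color
        for notify in self.listeners:    # tell the UI a pixel changed
            notify(x, y)

def render_async(camera, world, canvas, on_done=None):
    """Run the render on a background thread; returns the Thread."""
    def worker():
        for y in range(canvas.height):
            for x in range(canvas.width):
                ray = camera.ray_for_pixel(x, y)
                canvas.write_pixel(x, y, world.color_at(ray))
        if on_done:
            on_done()                    # e.g. show the render time
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Notifying on every pixel is the naive version; batching notifications (say, one per row or per 50 ms) is one answer to the exercise questions above.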
8. Implement a cancel option so you can cleanly abort a long-running render that doesn't look right. This item depends on suggestion #7.
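Suggestion #8 can be as simple as a cooperative flag the render loop checks between pixels; the GUI's Cancel button just sets it. This is a sketch using Python's threading.Event, with stand-in names for the book's objects.

```python
import threading

def render_cancellable(camera, world, canvas, cancel: threading.Event):
    """Render until done or until `cancel` is set; returns True if completed."""
    for y in range(canvas.height):
        for x in range(canvas.width):
            if cancel.is_set():
                return False             # aborted cleanly; canvas is partial
            ray = camera.ray_for_pixel(x, y)
            canvas.write_pixel(x, y, world.color_at(ray))
    return True                          # ran to completion
```

Checking once per pixel is cheap; checking once per row would be even cheaper and still feels instant to the user.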
9. Implement multi-threaded rendering, with different pixels rendering in different threads. This means that any update or lazy-load methods (such as Matrix.put and Matrix.inverse) will need to be synchronized to avoid race conditions. Bonus exercise: Have a UI option to configure the number of background threads to be used.
10. Provide a GUI option to change the resolution, with presets for standard resolutions (HD, VGA, SVGA, NTSC, etc.).
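One common shape for suggestion #9 is workers pulling rows from a shared queue, sketched below in Python. Each worker writes to different rows, so only the queue (and any shared lazy-loaded state like cached inverses) needs synchronization; the names here are illustrative.

```python
import queue
import threading

def render_parallel(camera, world, canvas, thread_count=4):
    """Render rows in parallel; blocks until all rows are done."""
    rows = queue.Queue()
    for y in range(canvas.height):
        rows.put(y)

    def worker():
        while True:
            try:
                y = rows.get_nowait()    # claim the next unrendered row
            except queue.Empty:
                return                   # no rows left; this worker is done
            for x in range(canvas.width):
                ray = camera.ray_for_pixel(x, y)
                canvas.write_pixel(x, y, world.color_at(ray))

    threads = [threading.Thread(target=worker) for _ in range(thread_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Handing out whole rows instead of single pixels keeps queue contention low; a UI spinner for thread_count covers the bonus exercise.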
11. Instead of rendering pixels across the screen, line by line, create a kind of progressive render, with pixels in various parts of the image being drawn, and then progressively filling in the rest of the image. This would give a quick preview of a more complicated scene, so the user would have a better idea if the image was working or not and cancel without waiting for the whole thing to render. This may be especially interesting if you have multiple threads.
Here are some example URLs with variants on progressive rendering.
vimeo.com/41002555
blog.codinghorror.com/progressive-image-rendering/
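A simple way to get the progressive effect of suggestion #11 is to change the order pixels are visited: render at a coarse stride first, then halve the stride each pass, skipping pixels a coarser pass already drew. This generator is a sketch of that idea (the interlacing variants in the links above are more sophisticated).

```python
def progressive_order(width, height, start_stride=8):
    """Yield (x, y) coordinates coarse-first for a progressive render."""
    seen = set()
    stride = start_stride
    while stride >= 1:
        for y in range(0, height, stride):
            for x in range(0, width, stride):
                if (x, y) not in seen:       # skip pixels from coarser passes
                    seen.add((x, y))
                    yield x, y
        stride //= 2                         # each pass doubles the detail
```

Feed these coordinates to the render loop instead of the usual row-by-row order, and optionally paint each coarse pixel as a stride-by-stride block so the preview fills the screen immediately.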
12. There is an example scene in Appendix 1 of the book, and others on the forum - all in YAML. Go ahead and incorporate a YAML parser into your system if you haven't already. Find all of the examples posted on the forum and try them out.
13. Other future directions
- Explore other shading models and see if they are worthwhile
- Implement other rendering methods - wireframe (hidden line removal), z-buffer
- Implement a spline-based surface object.
- Create a scene editor, including the ability to create/edit/delete objects in a tree structure, view them on screen, and load and save scenes. This would be a lot of work.
- Implement animations, spline-based key frames, etc. Also a lot of work.
- Add the ability to render multiple frames of an animation and assemble them into a video.
- Implement a render farm, with clients that can connect to a server that passes out scene assignments. Exercise for the reader: How would the client get the data to be rendered?
- It is possible to create a 2D image and an additional depth image and upload them to Facebook, which can show them with a 3D effect. This requires depth information, which is available in the Intersection but so far has been thrown away once the pixel color has been determined. Where would you save that information, and how would you save it? See here for an example in another program.