|
Post by Robert on Apr 12, 2019 5:37:15 GMT
Hi,
When using the suggested value of 0.00001 to calculate the over_point (implementing in Rust, using f32), all the tests pass, but the image ends up with acne:
If instead I change the value to 0.01 (only in prepare_computations), everything looks fine (and the tests still pass):
0.01 is a bit bigger than I'd like... Has anyone else encountered this specific problem? I see from other threads that others have encountered "epsilon weirdness" and other issues with "shadows", but I've not seen this particular one...
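For reference, the over_point computation being discussed is just a nudge of the hit point along the surface normal. A minimal Rust sketch, using an illustrative Tuple type (field and function names are assumptions, not necessarily matching Robert's implementation):

```rust
// A minimal sketch of the over_point computation from prepare_computations,
// using an illustrative Tuple type (names are assumptions, not necessarily
// matching Robert's implementation).
const EPSILON: f32 = 0.00001;

#[derive(Clone, Copy, Debug)]
struct Tuple { x: f32, y: f32, z: f32, w: f32 }

// Nudge the hit point along the surface normal so that the shadow ray
// cast from it cannot re-intersect the surface it starts on ("acne").
fn over_point(point: Tuple, normalv: Tuple) -> Tuple {
    Tuple {
        x: point.x + normalv.x * EPSILON,
        y: point.y + normalv.y * EPSILON,
        z: point.z + normalv.z * EPSILON,
        w: point.w,
    }
}

fn main() {
    let p = Tuple { x: -3.5912068, y: 2.6465669, z: 1.3976378, w: 1.0 };
    let n = Tuple { x: 0.7066506, y: 0.0003355, z: -0.7075626, w: 0.0 };
    println!("{:?}", over_point(p, n));
}
```

If the offset is too small for the floating-point precision in use, the shadow ray can still re-hit its own surface; too large, and shadows detach from their objects.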
|
|
|
Post by Jamis on Apr 12, 2019 12:18:04 GMT
Hello Robert,
Yeah, 0.01 is big enough to cause other problems down the road. It seems odd, too, that the acne is only occurring on the walls and floor, and not on the subject spheres. This makes me wonder if there might be something off about how the transforms applied to the walls/floor (specifically, the scaling) are interacting with the computation of the normal vector in prepare_computations. (If this were so, though, I would also expect the lighting on those elements to be strange...)
One thing you might try to investigate: find one particular pixel in the acne'd version, and then modify your render function so it only renders that pixel. Then you can step through the code (if debuggers are your thing) or sprinkle your code with print statements (if you're printist) to get more information about the state of your scene and your renderer at that point.
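That single-pixel approach might be sketched like this, with a stub closure standing in for the real ray_for_pixel + color_at pipeline (all names here are illustrative):

```rust
// Render a single pixel instead of the whole canvas, for debugging.
// `trace` stands in for the ray_for_pixel + color_at pipeline, whose
// exact names and types vary by implementation.
fn render_one_pixel<F>(x: usize, y: usize, trace: F) -> (f32, f32, f32)
where
    F: Fn(usize, usize) -> (f32, f32, f32),
{
    let color = trace(x, y);
    println!("pixel ({}, {}) -> {:?}", x, y, color);
    color
}

fn main() {
    // Stub tracer returning a fixed color; substitute your real one and
    // set a breakpoint (or sprinkle prints) inside it.
    render_one_pixel(0, 0, |_x, _y| (1.0, 0.0, 0.0));
}
```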
Also, if your code is available anywhere for me to look at, I'd be happy to take a peek and see if anything jumps out at me.
- Jamis
|
|
|
Post by Robert on Apr 14, 2019 15:32:32 GMT
Hi Jamis,
thank you, both for the quick response and for having written the book in the first place :-) I wrote a couple of ray tracers years ago, but a unified set of test cases to compare against is really helpful :-)
Regarding the code: I'm a bit wary of making it available publicly, especially since I'm using the project to finally learn Rust :-P . I'll PM you a link and login ;-)
Otherwise I've been stepping through the (0,0) pixel, which is conveniently one of the affected ones. The corresponding ray is:

  ray (raytracer::ray::Ray)
    origin:    (x: 0, y: 1.5, z: -5.00000048, w: 1)  [all f32]
    direction: (x: -0.483618021, y: 0.154405043, z: 0.861552477, w: 0)

The hit is:

  hit (raytracer::intersection::Intersection)
    t: 7.42570925
    object: raytracer::sphere::Sphere
      transform (raytracer::matrix::Matrix4):
        [ 7.07106781  -0.00707106758    3.09086204e-07  0 ]
        [ 0           -4.37113873e-10  -10              0 ]
        [ 7.07106781   0.00707106758   -3.09086204e-07  5 ]
        [ 0            0                0               1 ]

The precomputation is:

  comps (raytracer::intersection::Precomputation)
    t: 7.42570925
    object: raytracer::sphere::Sphere
    point:      (x: -3.59120679, y: 2.64656687, z: 1.39763784, w: 1)
    eyev:       (x: 0.483618021, y: -0.154405043, z: -0.861552477, w: 0)
    normalv:    (x: 0.706650615, y: 0.000335540884, z: -0.707562566, w: 0)
    inside:     false
    over_point: (x: -3.59119964, y: 2.64656687, z: 1.39763081, w: 1)

is_shadowed then finds a hit on the same sphere:

  hit (raytracer::intersection::Intersection)
    t: 0.00124897843
    object: raytracer::sphere::Sphere

with the light being at a distance of:

  distance: 15.0017233
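Those numbers hint at f32 simply running out of precision: the spacing between adjacent f32 values near the hit point's coordinates is already a few times 1e-7, and the wall's transform has entries around 7.07 and 10 that can amplify accumulated rounding error toward the 1e-5 over_point offset. A quick sanity check (a sketch, not from the thread's code):

```rust
// Spacing between adjacent f32 values ("ulp") near a given magnitude:
// roughly |x| * f32::EPSILON. Near the hit point's coordinates this is
// already a few times 1e-7, only ~25x smaller than the 1e-5 offset.
fn ulp_near(x: f32) -> f32 {
    x.abs() * f32::EPSILON
}

fn main() {
    let x: f32 = -3.5912068; // the hit point's x coordinate from the dump
    println!("ulp near {}: {:e}", x, ulp_near(x));

    // The wall's transform has entries around 7.07 and 10; each multiply
    // can scale accumulated rounding error by a similar factor, eating
    // into the 1e-5 over_point offset.
    println!("after one ~10x scale: {:e}", ulp_near(x) * 10.0);
}
```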
|
|
|
Post by ascotti on Apr 14, 2019 15:49:20 GMT
Hi Robert, would it be possible to test with 64 bit floats instead?
|
|
|
Post by chrisd on Apr 14, 2019 15:52:31 GMT
Hey, Robert, I noticed you're using f32. I've never used Rust, but a quick look at the documentation shows that's a 32-bit variable. Maybe switching to f64 would be more accurate and accumulate fewer errors?
From the Rust documentation: "The default type is f64 because on modern CPUs it's roughly the same speed as f32 but is capable of more precision."
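The precision difference chrisd describes is easy to see directly; this sketch just stores the same decimal literal at both precisions and prints the gap:

```rust
fn main() {
    // The same decimal literal stored at both precisions. f32 keeps about
    // 7 significant decimal digits, f64 about 15-16.
    let single: f32 = 1.6364;
    let double: f64 = 1.6364;
    println!("{:.10}", single);
    println!("{:.10}", double);
    println!("difference: {:e}", (single as f64 - double).abs());
}
```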
|
|
|
Post by Robert on Apr 14, 2019 17:03:22 GMT
Argh, that's what I get for forgetting that premature optimization is the root of all evil... The acne disappears when I use f64; however, a few tests now fail (difference of 0.0001, thus larger than epsilon), so I'll have to look into those. Thank you :-)
@jamis: Quick question: What bit-size did you use for the test values? e.g. using the epsilon of 0.00001 I have this failure, and I wonder whether the test values aren't meant to be 32-bit...:
---- material::tests::test_lighting_with_eye_in_the_path_of_the_reflection_vector stdout ----
thread 'material::tests::test_lighting_with_eye_in_the_path_of_the_reflection_vector' panicked at 'assertion failed: `(left == right)`
  left: `Color { red: 1.63638, green: 1.63638, blue: 1.63638 }`,
  right: `Color { red: 1.6363961030678928, green: 1.6363961030678928, blue: 1.6363961030678928 }`', src/material.rs:131:9
|
|
|
Post by chrisd on Apr 14, 2019 21:31:38 GMT
The suggested EPSILON on page xix in the book is 0.0001. For the sake of making it through the book, I suggest sticking to that. All of the tests are written assuming that value.
My copy of the book on page 87 shows 1.6364 as the expected value for this test, not 1.63638.
|
|
|
Post by Robert on Apr 15, 2019 8:28:23 GMT
We have different print runs ;-) My copy has no page xix; on page xvii it says 0.0001, but on page 5 it is 0.00001. According to the "epsilon weirdness" thread that's a mistake: 0.0001 is still fine for the tests, while 0.00001 is the recommended value for the "production" code. Given how I've implemented the operator overload for equality, though, I have to use the "production" value.
The test is on page 87 in my copy as well, expecting 1.6364, and my source code now specifies 1.6364 too. (I did have a deliberate change, as per the "weirdness" thread, based on a test failure when I was using f32.) But this is one of those cases where decimal values are not accurately represented by floating points in memory: 1.63638 is what the built-in formatter prints from the actual memory contents during the test. According to a floating point converter I found, 1.6364 is stored as 1.6363999843597412109375 as single (f32) or as 1.6364000000000000767386154620908200740814208984375 as double (f64).
TL;DR: I had adjusted that test's expected value so it passes with a uniform epsilon of 0.00001 for f32; I have now reverted that adjustment and it works with f64. I'd still like confirmation that f64 (double) is the intended precision, though ;-)
|
|
|
Post by Jamis on Apr 15, 2019 12:42:46 GMT
Robert,
The tests all assume that floating point values are rounded to four decimal places (0.0001) before being compared; this is why I called the 0.00001 epsilon a "mistake". If you're unable to switch epsilon values (it's not simple for me either, in my own reference implementation), I recommend going with 0.0001 during development so that your tests align better with the book, but if you don't mind adjusting your tests to account for those rounding assumptions, you can use a higher-precision epsilon instead.
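That four-decimal-place convention is commonly implemented as an epsilon comparison; a sketch (the helper name is illustrative, EPSILON is the book's suggested 0.0001):

```rust
// A sketch of comparing floats to four decimal places; the helper name
// is illustrative, EPSILON is the book's suggested value of 0.0001.
const EPSILON: f64 = 0.0001;

fn approx_eq(a: f64, b: f64) -> bool {
    (a - b).abs() < EPSILON
}

fn main() {
    // The book's expected 1.6364 and a higher-precision computed value
    // agree to four decimal places, so the comparison passes.
    assert!(approx_eq(1.6364, 1.6363961030678928));
    println!("ok");
}
```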
- Jamis
|
|
|
Post by David Aramant on Nov 1, 2019 2:29:03 GMT
I've run across the exact same issue. I'm using C# with the SIMD-optimized Vector4 & Matrix4x4 from System.Numerics, so it's not exactly trivial for me to change to double-precision floats. I'm kinda hoping that the problem goes away once real planes are implemented in the next chapter...
EDIT: Planes do not have the issue, but it doesn't really solve the problem. It seems to show up whenever a sphere is extremely deformed.
EDIT2: To get around it I made a FarOverPoint property on Computation that's shifted farther out and use that only for determining if a point is in shadow. We'll see if it causes more problems later.
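David's FarOverPoint idea might look like this, sketched in Rust rather than C# for consistency with the rest of the thread; the value of SHADOW_EPSILON here is a guess, not taken from his implementation:

```rust
// Sketch of David's FarOverPoint workaround: a second, larger offset
// used only for the shadow test. SHADOW_EPSILON's value is a guess,
// not taken from his C# implementation.
const EPSILON: f32 = 0.00001;
const SHADOW_EPSILON: f32 = 0.001;

fn offset(p: [f32; 3], n: [f32; 3], by: f32) -> [f32; 3] {
    [p[0] + n[0] * by, p[1] + n[1] * by, p[2] + n[2] * by]
}

struct Comps {
    point: [f32; 3],
    normalv: [f32; 3],
}

impl Comps {
    // Used for everything except shadow tests.
    fn over_point(&self) -> [f32; 3] {
        offset(self.point, self.normalv, EPSILON)
    }
    // Pushed farther out; used only by is_shadowed().
    fn far_over_point(&self) -> [f32; 3] {
        offset(self.point, self.normalv, SHADOW_EPSILON)
    }
}

fn main() {
    let c = Comps { point: [0.0, 0.0, 0.0], normalv: [0.0, 1.0, 0.0] };
    println!("{:?} {:?}", c.over_point(), c.far_over_point());
}
```

Keeping the large offset out of the reflection/refraction paths limits the risk that it causes the "other problems down the road" Jamis mentioned.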
|
|
|
Post by inventordave on Nov 12, 2019 15:03:29 GMT
I am implementing in JavaScript, and while I don't get acne from my chapter 8 work, the chapter-end exercise for chapter 9 is giving me acne for the floor plane, but not the 2 spheres I have placed on it.... Here is an image: The floor is colour(1,0,0). I experimented with not calculating an over_point for planes, but that made no difference....
|
|
|
Post by Jamis on Nov 12, 2019 15:51:39 GMT
That's a lot more black than I would expect for an acne problem. I'd be curious to see what the computed values are within the lighting() function for a single black pixel of that plane. It might shed some light (ha!) on the issue.
- Jamis
|
|
|
Post by inventordave on Nov 12, 2019 16:39:48 GMT
I have added some debugging traps. When the first non-black pixel is reached (which presumably should be colour(1,0,0)), I get the following values in the lighting(…) function:
BEFORE "if (light_dot_normal < 0 || in_shadow) {":
effective_color = colour(0.065, 0.65, 0.325)
lightv = vector(-0.5768..., 0.5066..., -0.6408...)
ambient = colour(0.0065, 0.065, 0.0325)
light_dot_normal = 0.8098
in_shadow = false
Then, fails "if (light_dot_normal < 0 || in_shadow)", so...
diffuse = colour(0.0158..., 0.1579..., 0.0789...)
…
reflectv = vector(0.0993..., 0.9756..., 0.1958...)
reflect_dot_eye = -0.25453...
Therefore, "if (reflect_dot_eye <= 0)" evaluates to true, so jump to last part of function, adding the colour components together:
(specular = colour(0,0,0))
result = colour(0.0223...,0.2229...,0.1115)
I hope this is what you asked for.... If there is no hit in color_at(), colour(0,0,0) is returned and we never reach shade_hit(), and therefore never lighting()….
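As a sanity check on those numbers, the final step of lighting() is just a component-wise sum of ambient, diffuse, and specular. With the truncated values from the trace above (a sketch; small discrepancies are only from the digits hidden behind the "..."):

```rust
fn main() {
    // Component-wise sum of the (truncated) values from the trace above;
    // specular is black here, so result = ambient + diffuse.
    let ambient = [0.0065_f64, 0.065, 0.0325];
    let diffuse = [0.0158, 0.1579, 0.0789];
    let specular = [0.0, 0.0, 0.0];
    let result: Vec<f64> = (0..3)
        .map(|i| ambient[i] + diffuse[i] + specular[i])
        .collect();
    println!("{:?}", result);
}
```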
|
|
|
Post by inventordave on Nov 12, 2019 16:54:07 GMT
I have just noticed that sometimes color_at(w, r) is being passed an "undefined" ray. It is called in render(c, w). It means that this line: "var r = c.ray_for_pixel(x, y);" in render(c, w) is sometimes not returning a defined ray... Here is the code in the camera class/object definition for camera.ray_for_pixel(px, py)….
this.ray_for_pixel = function(px, py) {
    var xoffset = (px + 0.5) * this.pixel_size;
    var yoffset = (py + 0.5) * this.pixel_size;
    var world_x = this.half_width - xoffset;
    var world_y = this.half_height - yoffset;
    var input1 = inverse(this.transform);
    var input2 = point(world_x, world_y, -1);
    //debugger;
    var pixel = multiply_tuple_by_matrix(input1, input2);
    var origin = multiply_tuple_by_matrix(inverse(this.transform), point(0,0,0));
    var direction = normalize(subtract(pixel, origin));
    return new ray(origin, direction);
}
I don't know why it wouldn't return a ray, but ho-hum....
EDIT: I did a quick test, and origin and direction are always successfully created as tuples....
EDIT2: I think it may have been a false alarm, I think I had a brain-fart. But, see below...
|
|
|
Post by inventordave on Nov 13, 2019 8:41:04 GMT
I think this is interesting. To debug, I changed color_at() to emit colour(1,1,1) when no hit instead of black, and it rendered thus: So, the rays are hitting the plane....
|
|