io/ioutil: data race on blackHole #3970
Comments
How can you be sure what happens inside a black hole? More seriously, do we care? The whole point of ioutil.Discard and its devNull type is to just throw away data. I don't care if multiple goroutines are throwing away data to the same place. What are you proposing? Changing the implementation, or quieting this warning somehow?
I agree that it's not particularly harmful. But it must be quieted somehow so that it does not fail tests. I still do not have a suppression mechanism for Go, and I do not want to introduce one. So the only option for now is changing the code. I do not know how to do it, though (I understand that memory allocation is not a good option performance-wise). Another point is that if several parallel goroutines write to the same memory, it is slooooooow.
Well, in C1x it would be Undefined Behaviour: your box catches fire and launches nuclear missiles. The Go Memory Model is somewhat evasive on this. I also don't see a reasonable alternative for now. Can we leave this issue in some postponed state? It must be somehow solved for ThreadSanitizer - one way or another it must not produce reports here for user tests. To date I have no special means to "suppress" some reports, and it would be good to preserve that. One possible future solution: if/when runtime Processors are committed and we have support for per-Processor caching (there are exactly GOMAXPROCS Processors), the blackHole buffer would be a good candidate for such caching.
This issue was closed.