Performance comparison of read operations between Dictionary and FrozenDictionary

3 Comments

  1. Marius Ungureanu says:

    The perf hit with records comes from the record-generated code.

    Normally, GetHashCode can be overridden for a type. The custom class used before the record version doesn't do that, so it falls back to RuntimeHelpers.GetHashCode – basically a hash associated with that instance's allocation – and then object.ReferenceEquals in case of a hash collision during bucket traversal. It doesn't care about the fields you declared. You could probably make it as slow as the record by manually overriding GetHashCode and Equals.

    For the record type, it has to compute a combined hash code, so there are multiple GetHashCode calls (the top level plus one per field), sometimes even recursively. On a hash collision you end up with even more overhead, since structural equality has to compare all the fields themselves (see the sketch below).
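
    A hand-written sketch of roughly what the compiler generates for such a record – not the exact emitted code (which also hashes an EqualityContract and goes through EqualityComparer<T>.Default per field), and `RecordKey` is a hypothetical stand-in for the post's key type:

    ```csharp
    using System;

    // The record declaration:
    public record RecordKey(string Field1, string Field2);

    // ...expands to equality members roughly equivalent to this:
    public sealed class RecordKeyExpanded : IEquatable<RecordKeyExpanded>
    {
        public string Field1 { get; }
        public string Field2 { get; }

        public RecordKeyExpanded(string field1, string field2) =>
            (Field1, Field2) = (field1, field2);

        // Combined hash: one GetHashCode call per field on every lookup.
        public override int GetHashCode() => HashCode.Combine(Field1, Field2);

        // Structural equality: every field is compared on a hash collision.
        public bool Equals(RecordKeyExpanded? other) =>
            other is not null && Field1 == other.Field1 && Field2 == other.Field2;

        public override bool Equals(object? obj) => Equals(obj as RecordKeyExpanded);
    }
    ```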

    Since the record type involves at least three pointer accesses (the key itself plus field1 and field2, in both GetHashCode and Equals), and depending on how the strings are initialized, that might lead to cache misses. Using something like a primitive might reduce the numbers a bit, but it would still try to hash each field.

    The numbers might improve quite a bit if IEquatable<T> is implemented on your type and the hash code is cached, which is safe when the type is immutable – roughly as in the sketch below.
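
    A minimal sketch of that idea, assuming an immutable key type (`CachedKey` is a hypothetical name; the field names follow the field1/field2 naming above):

    ```csharp
    using System;

    public sealed class CachedKey : IEquatable<CachedKey>
    {
        private readonly string _field1;
        private readonly string _field2;
        private readonly int _hash; // computed once; safe because the fields never change

        public CachedKey(string field1, string field2)
        {
            _field1 = field1;
            _field2 = field2;
            _hash = HashCode.Combine(field1, field2);
        }

        // Dictionary lookups return the cached value instead of re-hashing each field.
        public override int GetHashCode() => _hash;

        // The typed overload lets EqualityComparer<CachedKey>.Default call this
        // directly instead of the object-based overload.
        public bool Equals(CachedKey? other) =>
            other is not null && _field1 == other._field1 && _field2 == other._field2;

        public override bool Equals(object? obj) => Equals(obj as CachedKey);
    }
    ```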

    Hope this sheds some light on why the numbers go up.

  2. Hi Marius. Thanks for the nice post. Did you measure the performance again with a newer .NET release? It would be nice to see whether anything has changed, or do you have the code to share?
