Re: [EdgeX-TSC-Core] CBOR - Re-encode Performance Metrics

Tobias Mosby
 

Yes, this is very helpful to see the metrics we can anticipate for re-encode, garbage collection, and memory utilization.

Thank you Anthony!

 

Are the timings you measured for re-encode in units of microseconds? I re-ran the benchmark and was trying to confirm the actual unit; I will follow up with you.

 

This morning we discussed sharing anticipated response times for the SHA256 vs. MD5 checksum hashing algorithms.

Eric also suggested taking a look at xxHash, so it is included below for comparison as well.

 

NOTES:

·         CBOR encode/decode was performed using go-codec v1.1.4, which is not yet in the EdgeX codebase. I also found that setting up a buffered reader adversely impacts CBOR encode performance.

·         Each run was given the same three payload sizes (small/medium/large), performed across 1k iterations each, with a rolling average used to produce the result set.

·         The only intentional difference between runs was the checksum method used to compute the event/payload hash.

·         Takeaway: As seen in Run #3, xxHash.Checksum64 is demonstrably more performant. However, if a “weaker” (but speedy) hash such as this were used in place of a UUID and a re-encode of the event model, collision mitigation may need to be addressed at the persistence layer. For example, FIFO ordering could resolve which event record to mark “pushed” if/when distinct events are assigned the same hash value; ref: https://github.com/Cyan4973/xxHash/issues/165

 

Run #1

| Dimension                    | 100KB Payload | 900KB Payload | 12MB Payload |
|------------------------------|---------------|---------------|--------------|
| Binary Image Load            | 53µs          | 439µs         | 3850µs       |
| CBOR Encode                  | 33µs          | 177µs         | 1700µs       |
| Checksum (SHA256)            | 598µs         | 6000µs        | 85000µs      |
| CBOR Decode                  | 90µs          | 1000µs        | 81000µs      |
| Overall Iteration Time       | 0.75ms        | 7.5ms         | 171ms        |
| TOTAL Time (1000 iterations) | 0.8s          | 7.5s          | 2m51s        |

 

Run #2

| Dimension                    | 100KB Payload | 900KB Payload | 12MB Payload |
|------------------------------|---------------|---------------|--------------|
| Binary Image Load            | 49µs          | 452µs         | 3900µs       |
| CBOR Encode                  | 50µs          | 205µs         | 1900µs       |
| Checksum (MD5)               | 208µs         | 1900µs        | 24000µs      |
| CBOR Decode                  | 63µs          | 980µs         | 77000µs      |
| Overall Iteration Time       | 0.37ms        | 3.5ms         | 107ms        |
| TOTAL Time (1000 iterations) | 0.4s          | 3.5s          | 1m47s        |

 

Run #3

| Dimension                    | 100KB Payload | 900KB Payload | 12MB Payload |
|------------------------------|---------------|---------------|--------------|
| Binary Image Load            | 46µs          | 411µs         | 4000µs       |
| CBOR Encode                  | 36µs          | 182µs         | 1800µs       |
| Checksum (xxHash)            | 15µs          | 121µs         | 1700µs       |
| CBOR Decode                  | 64µs          | 805µs         | 84000µs      |
| Overall Iteration Time       | 0.16ms        | 1.5ms         | 91ms         |
| TOTAL Time (1000 iterations) | 162ms         | 1.5s          | 7m36s        |

 

*The metrics in the tables above should be taken as anecdotal (mileage may vary), since they were collected on my laptop in a VM with other services, such as the EdgeX stack, running.

 

Best regards,
Toby

 

From: EdgeX-TSC-Core@... [mailto:EdgeX-TSC-Core@...] On Behalf Of Anthony Bonafide
Sent: Monday, April 22, 2019 2:14 PM
To: EdgeX-GoLang@...; edgex-tsc-core@...
Subject: [EdgeX-TSC-Core] CBOR - Re-encode Performance Metrics

 

Hello all,

 

    Last week during the Core Working Group call we discussed a couple of ways to address some issues with capturing ID and time information when dealing with CBOR content in core-data. One option was to decode the original content, update the necessary data, and re-encode to CBOR before publishing. Before going down that path we wanted to gather some performance data on the re-encoding process. I have created a GitHub repo which simulates the re-encoding process, similar to how we would implement the previously mentioned steps. The README contains instructions on how to run the benchmark tests, along with the test structure.

 

     In a nutshell, the benchmark accepts two arguments: the first specifies how many iterations to execute, and the second specifies the Event payload size. The options for payload size are small, medium, and large, which correspond to 100K, 900K, and 12M respectively (to match the results shared by Tobias regarding checksum implementations). The logic for the benchmark works as follows:

  1. Create CBOR bytes of an Event with a payload of the specified size
  2. Capture system metrics and the start time
  3. Decode CBOR to an Event
  4. Change the ID and Pushed fields
  5. Encode the updated Event
  6. Repeat steps 3-5 for the specified number of iterations
  7. Calculate elapsed time
  8. Display metrics

 

    A run of the tests on my laptop resulted in the following:

 

| Metric                                                          | Event with 100K payload | Event with 900K payload | Event with 12M payload |
|-----------------------------------------------------------------|-------------------------|-------------------------|------------------------|
| Average re-encode time (nanoseconds)                            | 4148                    | 4497                    | 4554                   |
| GC runs                                                         | 590                     | 755                     | 186                    |
| GC stop-the-world time (milliseconds)                           | 57                      | 71                      | 21                     |
| Memory allocations (memory at test completion – before test)    | 30000096                | 30000148                | 30000046               |
| System memory in bytes (memory at test completion – before test)| 1179648                 | 1441792                 | 1179648                |

 

    There is more information provided by the CLI tool, but those are the metrics I felt might be most important. There are also a few Go Benchmark tests in the repo which benchmark encoding, decoding, and the re-encoding process listed above. Each of the tests is isolated and provides another way to gather performance metrics. Instructions for executing the Go Benchmark tests can be found in the Go Benchmark section of the README. Hopefully this helps; if anyone has ideas or suggestions for more and/or better metrics, feel free to reach out.

 

 

Thank you,

 

Anthony Bonafide
