Re: Export Go - for preview

Dellabianca, Alberto
 

Hi all,

My 2 cents on the Message Bus discussion.

 

There are pros and cons to both broker and brokerless approaches when trying to balance simplicity, latency requirements, and single-point-of-failure concerns.

There is no perfect solution. In critical control environments we use DDS, but that would be an overcomplication for a system like EdgeX.

 

In a Go world, a proposal could be a phased approach: a simple broker infrastructure as the first implementation and a brokerless option in a second release.

 

Release I --> NATS

Release II --> NATS (default), gRPC (alternative option)

 

 

In a NATS world, Device Services would publish messages to the broker. Subscribers could be core-data, rules-engine, and export-distribution. Rules-engine could be a publisher as well, since the results of a calculation could be written back to NATS on a new topic.

 

NATS:

 
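To make this concrete, below is a minimal sketch (not an actual EdgeX implementation) of how a Device Service could publish an event and how core-data / rules-engine / export-distribution could subscribe to it through NATS, using the official NATS Go client. The subject name "edgex.events", the payload, and the broker address are illustrative assumptions only.

package main

import (
	"log"
	"time"

	nats "github.com/nats-io/nats.go" // official NATS Go client
)

func main() {
	// Connect to the well-known broker (nats://127.0.0.1:4222 by default).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Consumer side (core-data, rules-engine, export-distribution):
	// subscribe to the topic. "edgex.events" is an illustrative name.
	if _, err := nc.Subscribe("edgex.events", func(m *nats.Msg) {
		log.Printf("received event: %s", m.Data)
	}); err != nil {
		log.Fatal(err)
	}

	// Producer side (Device Service): publish a reading; the broker
	// fans it out to every subscriber on the topic.
	if err := nc.Publish("edgex.events", []byte(`{"device":"thermostat-01","temperature":21.5}`)); err != nil {
		log.Fatal(err)
	}
	nc.Flush()

	time.Sleep(100 * time.Millisecond) // give the async handler time to run
}

The rules-engine could publish its results back in exactly the same way, just on a different subject.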

In a gRPC world, all microservices could talk to each other with a point-to-point approach, opening channels directly between services. This is similar to REST but would support streaming and higher throughput.

 

gRPC:

 
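For comparison, here is a rough sketch of what a streaming, point-to-point exchange between a Device Service and core-data could look like over gRPC. Everything EdgeX-specific in it is an assumption: the coredata package, the EventService with its client-streaming SubmitEvents RPC, the Event message, and the core-data:48080 address are illustrative names that would only exist once an agreed .proto file has been written and generated with protoc.

// Device Service side of a hypothetical gRPC channel to core-data.
// The "coredata" package and its types are assumed to be generated by
// protoc from a .proto along the lines of:
//
//   service EventService {
//     rpc SubmitEvents (stream Event) returns (SubmitAck);
//   }
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"

	coredata "example.com/edgex/proto/coredata" // hypothetical generated package
)

func main() {
	// One channel per service pair; no central broker involved.
	conn, err := grpc.Dial("core-data:48080", grpc.WithInsecure()) // address is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := coredata.NewEventServiceClient(conn)

	// Client-side streaming: many events pushed over one open channel,
	// which is where gRPC improves on plain request/response REST.
	stream, err := client.SubmitEvents(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 10; i++ {
		if err := stream.Send(&coredata.Event{Device: "thermostat-01", Reading: "21.5"}); err != nil && err != io.EOF {
			log.Fatal(err)
		}
	}
	if _, err := stream.CloseAndRecv(); err != nil {
		log.Fatal(err)
	}
}

The trade-off versus the broker option is that every producer/consumer pair needs its own channel and some way to discover the peer's address, instead of a single well-known broker.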

Alberto

 

 

 

 

 

 

-----Original Message-----
From: edgex-golang-bounces@... [mailto:edgex-golang-bounces@...] On Behalf Of James.White2@...
Sent: Tuesday, February 20, 2018 9:07 AM
To: fclaramonte@...; drasko@...; dejan@...; edgex-golang@...
Cc: manuel@...; Gorka.Garcia@...
Subject: [EXTERNAL] Re: [Edgex-golang] Export Go - for preview

 

Dell - Internal Use - Confidential 

 

 

Thanks for this Fede.  For our Go Lang project call today, I have as one of the agenda items to discuss this a bit.  Is it time, perhaps, to discuss replacing 0MQ, and if so, with something like NATS or gRPC that cross-compiles in pure Go?  Or do we just say that for this release (California), you have to run core-data and export-distro containerized in a Linux-based container, and all development of those has to happen on Linux to be able to do the 0MQ-required compilation?

 

 

I think this will be a good discussion.

 

 

-----Original Message-----

From: Fede Claramonte [mailto:fclaramonte@...]

Sent: Tuesday, February 20, 2018 8:36 AM

To: Drasko DRASKOVIC <drasko@...>; White2, James <James_White2@...>; dejan.mjc <dejan@...>

Cc: chencho <smunoz@...>; manuel@...; darko@...; Garcia, Gorka <Gorka.Garcia@...>; Janko Isidorovic <janko@...>; Jeremy Phelps <jphelps@...>

Subject: Re: Export Go - for preview

 

 

Right now the problems with having dependencies that use cgo are:

- Cross compilation: there is no simple way to cross-compile to another architecture when using cgo.

- Using 'from scratch' Docker images: when compiling with cgo enabled, Go binaries have a dependency on the glibc version used at compile time, from what I can understand from the compilation log (pasted below).

 

 

Fede

 

 

feclare@serrano:~/projects/gopath/src/github.com/edgexfoundry/edgex-go$ make cmd/core-data/core-data

CGO_ENABLED=1 go build -ldflags "-X main.version=0.5.0 -extldflags '-lstdc++ -static -lsodium -static -lzmq -lpgm -static -lpthread -static -lm -static' -v " -o cmd/core-data/core-data ./cmd/core-data

# github.com/edgexfoundry/edgex-go/cmd/core-data

HEADER = -H4 -T0x401000 -D0x0 -R0x1000

searching for runtime.a in $WORK/runtime.a

searching for runtime.a in /home/feclare/projects/gopath/pkg/linux_amd64/runtime.a

searching for runtime.a in /usr/lib/go-1.8/pkg/linux_amd64/runtime.a

 0.00 deadcode

 0.08 pclntab=1263809 bytes, funcdata total 172301 bytes

 0.11 dodata

 0.14 dwarf

 0.16 symsize = 0

 0.20 reloc

 0.22 asmb

 0.22 codeblk

 0.23 datblk

 0.24 sym

 0.24 symsize = 222144

 0.25 symsize = 224088

 0.25 dwarf

 0.28 headr

 0.30 host link: "gcc" "-m64" "-gdwarf-2" "-o"

"/tmp/go-build995971213/https://urldefense.proofpoint.com/v2/url?u=http-3A__github.com_edgexfoundry_edgex-2Dgo_cmd_core-2Ddata_-5Fobj_exe_a.out&d=DwIGaQ&c=oTZJxWjXA97U5efbDJZGEg&r=ng_tkMHyi7CGJozkvpxBfnH8oKpxLQBNTCY3LrRYwLU&m=wCsr92DqHd_iQG0VP-eIfYIy0eGsx2jmEPo_KUtdOmA&s=JeJMMOEEPLU3_tIMkzaT2YWAlwMERKWO5OZRUtE1w0o&e="

"-static" "/tmp/go-link-683737096/go.o"

"/tmp/go-link-683737096/000000.o" "/tmp/go-link-683737096/000001.o"

"/tmp/go-link-683737096/000002.o" "-g" "-O2" "-g" "-O2" "-lpthread" "-g"

"-O2" "-lzmq" "-no-pie" "-lstdc++" "-static" "-lsodium" "-static"

"-lzmq" "-lpgm" "-static" "-lpthread" "-static" "-lm" "-static"

/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libzmq.a(src_libzmq_la-ipc_listener.o):

In function `zmq::ipc_listener_t::filter(int)':

(.text+0xd60): warning: Using 'getgrgid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

(.text+0xd43): warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

/tmp/go-link-683737096/000000.o: In function

`_cgo_cb4050e05860_C2func_getaddrinfo':

/build/golang-1.8-GaFns3/golang-1.8-1.8.3/src/net/cgo-gcc-prolog:46:

warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libpgm.a(libpgm_noinst_la-getnodeaddr.o):

In function `pgm_getnodeaddr':

(.text+0x28a): warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libpgm.a(libpgm_noinst_la-getnetbyname.o):

In function `pgm_getnetbyname':

(.text+0x321): warning: Using 'getnetent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

(.text+0x31c): warning: Using 'setnetent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

(.text+0x482): warning: Using 'endnetent' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libpgm.a(libpgm_noinst_la-getprotobyname.o):

In function `_pgm_native_getprotobyname':

(.text+0x42): warning: Using 'getprotobyname_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

 1.46 cpu time

96145 symbols

88588 liveness data

 

 

 

 

 

 

On 19/02/18 20:36, Drasko DRASKOVIC wrote:

> On Mon, Feb 19, 2018 at 5:11 PM,  <James.White2@...> wrote:

>> Dell - Internal Use - Confidential

>> 

 

>> I think this is the 0MQ stuff right Fede??

> I am sure Fede will find a way to compile ZMQ. It is a popular and

> widely used lib; I doubt that we cannot handle the compilation...

 

> Fede,

> can you please post the logs if you run into trouble, and we can

> debug together.

 

>> Something we need to eventually revisit - replacing 0MQ with

>> something like Nats or gRPC so that we don't have to do any C

>> compiling and everything is Go.  Always more work guys  :)

> We have to decide if we want a centralized broker or decentralized async

> comm like we have today. Both approaches have benefits and drawbacks.

> A centralized broker (NATS) would probably simplify the design a lot, as

> every service would just publish its events to a topic on a known broker

> (no need for complicated service discovery), and then whichever

> service needs to consume these events would just subscribe to that topic.

> On the other hand, it is a SPOF, but this can be solved with an FT

> deployment.

 

> ZMQ is a good choice, although I would personally select nanomsg:

> http://nanomsg.org/, which is a successor of ZMQ (same author). Some

> interesting info:

> https://bravenewgeek.com/a-look-at-nanomsg-and-scalability-protocols/,

> and there is also a Go-native implementation:

> https://github.com/go-mangos/mangos.

 

> gRPC would probably be easier to implement and maintain, but it is an

> HTTP-based protocol (not raw TCP, like ZMQ), so some overhead comes with

> it. Here is a nice comparison:

> https://stackoverflow.com/questions/39350681/grpc-and-zeromq-comparsion.

> I am not sure that in our lightweight approach gRPC is a better choice

> than ZMQ/nanomsg. I expect higher throughput with ZMQ

> (https://bbengfort.github.io/observations/2017/09/04/message-throughput.html)

> - although not much higher, since there are just a few clients

> connected. I also expect high throughput with NATS, but the pure TCP

> socket that ZMQ/nanomsg uses is practically unbeatable:

> https://bravenewgeek.com/dissecting-message-queues/

 

> I propose that we make everything work with ZMQ for now, to have a

> stable MVP. We can experiment later, in separate experimental branches,

> with replacing the communication mechanisms and see how this affects

> performance and improves/simplifies the system design.

 

> Best regards,

> Drasko DRASKOVIC

> Mainflux Author and Technical Advisor

 

> http://www.mainflux.com   |  Industrial IoT Cloud

> -------------------------------------------------------------------

> Engineering Division |   Paris, France

 

> LinkedIn: https://www.linkedin.com/in/draskodraskovic

> Twitter: @draskodraskovic

 

 

 

_______________________________________________

EdgeX-GoLang mailing list

EdgeX-GoLang@...

https://lists.edgexfoundry.org/mailman/listinfo/edgex-golang
