Topics

export microservices design

Fede Claramonte
 

Hi all,

After some time working on the distro services, I have some concerns that I would like to bring up for discussion.

There are two microservices, client and distro. client manages the export registrations (creation/modification/deletion) and stores them in Mongo. distro then exports every event received from the data microservice to each registered service.

I have these issues:

- Both microservices share the same Mongo database to access the registration data. I think this goes against the microservices guidelines you were discussing.

- There is no mechanism for the client microservice to notify distro that the registration data has changed. The Java distro reads the registrations from Mongo for every event it receives. I don't think this scales when there are lots of events.

- I don't understand the need to have two different microservices in this case.

I propose one of the following changes:

- Merge both microservices. They are both needed and it does not make sense to run one without the other, so merging them simplifies the design without losing any flexibility.

- Add an API to distro so that it can be notified when any registration changes. distro can then update them from Mongo and there is no need for more polling. A rough sketch of that option is below.
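
Something like this (a minimal Go sketch; the path, port and function names are just placeholders, not the actual implementation):

    package main

    import (
        "log"
        "net/http"
    )

    // reloadRegistrations would re-read the registration list from Mongo
    // and swap distro's in-memory copy. Placeholder only.
    func reloadRegistrations() error {
        // ... query Mongo and rebuild the in-memory list ...
        return nil
    }

    func main() {
        // client would call this endpoint whenever a registration is
        // created, modified or deleted.
        http.HandleFunc("/api/v1/notify/registrations", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodPost {
                w.WriteHeader(http.StatusMethodNotAllowed)
                return
            }
            if err := reloadRegistrations(); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.WriteHeader(http.StatusOK)
        })
        log.Fatal(http.ListenAndServe(":48070", nil))
    }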

Regards,

Fede

Drasko DRASKOVIC
 

Hi Fede,

On Mon, Oct 30, 2017 at 2:30 PM, Fede Claramonte
<fclaramonte@...> wrote:
Hi all,

After some time working on the distro services, I have some concerns that I
would like to bring up for discussion.

There are two microservices, client and distro. client manages the export
registrations (creation/modification/deletion) and stores them in Mongo.
distro then exports every event received from the data microservice to each
registered service.

I have these issues:

- Both microservices share the same Mongo database to access the
registration data. I think this goes against the microservices guidelines
you were discussing.

- There is no mechanism for the client microservice to notify distro that
the registration data has changed. The Java distro reads the registrations
from Mongo for every event it receives. I don't think this scales when
there are lots of events.

- I don't understand the need to have two different microservices in this
case.

I propose one of the following changes:

- Merge both microservices. They are both needed and it does not make
sense to run one without the other, so merging them simplifies the design
without losing any flexibility.
I guess the initial motivation for breaking this into microservices was
to separate responsibilities and make the design clearer. Although, as you
said, one can't run without the other, that does not mean we cannot
separate the code (nothing, for example, can run without core, etc.).

However, if the coupling is high, then I am not against this merge. It
will certainly simplify development.

Maybe the division should be done in a different way, where we would
enable just the connectors as plugins, so that people can add them and
select only the connectors that they actually need in a deployment?


- Add an API to distro so that it can be notified when any registration
changes. distro can then update them from Mongo and there is no need for
more polling.
I am definitely for notifications. Moreover, there is such a global flow of
notifications (from core, etc.) that I am starting to believe one central
pub/sub broker could simplify the design a lot. In that case I think we
would not even need ZMQ or similar RPC methods, because all services could
communicate via a centralized pub/sub service.

Definitely +1 for notifications, and we can add this to client, but I
have the following question: if we send notifications from client, then
you must cache these changes somewhere in distro. Do you find this OK, or
would you rather have client export an API through which distro could
obtain the data without directly hitting the MongoDB that belongs to the
client microservice?

Best regards,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic

James.White2@...
 

Hi Fede,
As always, you have identified some shortcomings which are on target. Drasko has noted some of the same issues.
Yes - we did not get a chance to develop an export data client library so that the export services could have some isolation with regard to the database - particularly export distro. It would be good for distro not to have to reference Mongo (or another DB) directly.

Yes - there need to be callbacks in place to allow distro to be notified of changes. That would allow distro to contain a "cache" of client registrations and not have to read from the DB each time. TODO notes in the old Fuse code outline this exact need.
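
For illustration only, a minimal sketch (Go, hypothetical names, not the existing code) of such a cache that gets swapped whenever a change callback arrives:

    package cache

    import "sync"

    // Registration is a placeholder for the export registration record.
    type Registration struct {
        ID   string
        Name string
        // ... format, destination, etc.
    }

    // RegistrationCache keeps the registrations in memory so distro does
    // not have to hit the DB for every event.
    type RegistrationCache struct {
        mu   sync.RWMutex
        regs map[string]Registration
    }

    func New() *RegistrationCache {
        return &RegistrationCache{regs: make(map[string]Registration)}
    }

    // All returns a snapshot of the cached registrations.
    func (c *RegistrationCache) All() []Registration {
        c.mu.RLock()
        defer c.mu.RUnlock()
        out := make([]Registration, 0, len(c.regs))
        for _, r := range c.regs {
            out = append(out, r)
        }
        return out
    }

    // Replace swaps the whole cache; called from the change callback.
    func (c *RegistrationCache) Replace(regs []Registration) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.regs = make(map[string]Registration, len(regs))
        for _, r := range regs {
            c.regs[r.ID] = r
        }
    }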

The reason for two different services was to allow export-client, in some cases, to be removed, as it might be provided by a system management tool or other UI application that allows for a more user-friendly deposit of export client registrations. Distro, therefore, was made a separate service.

As the Applications Working Group (Janko, Drasko, Darko, you, Gorka, and others) looks at these services and/or re-implements them in Go, I encourage you all to make the choices and decisions that you think are right for this area. I personally do like the idea of keeping the registration and distro functionality separate, so as to allow for different implementations of either, or removal of client when other 3rd party applications take care of it, but you could make a very effective argument to combine them and I would not argue either way.

So yes - please add to and update these services as you all see fit, in the best interest of the community. Our Dell implementation was a first cut, but it certainly needs work.




Drasko DRASKOVIC
 

Hi Jim,

On Mon, Oct 30, 2017 at 3:58 PM, <James.White2@...> wrote:
Hi Fede,
As always, you have identified some shortcomings which are on target. Drasko has noted some of the same issues.
Yes - we did not get a chance to develop an export data client library so that the export services could have some isolation with regard to the database - particularly export distro. It would be good for distro not to have to reference Mongo (or another DB) directly.

Yes - there need to be callbacks in place to allow distro to be notified of changes. That would allow distro to contain a "cache" of client registrations and not have to read from the DB each time. TODO notes in the old Fuse code outline this exact need.
This looks good to me, just one question regarding notifications.
Let's say distro boots after client (or is restarted) - how will it
get the registrations to fill its cache? I think client must export some
HTTP (or ZMQ or gRPC) API so that distro can always fetch the
credentials the first time it needs them. From that point on it can
cache them, and if we have this caching then notifications are needed
from the client side to update the cache in distro and keep it consistent.

But I think the first step would be exporting a simple API on the client
service, so that distro can use it to read the DB instead of going into
the DB directly.
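
As a sketch of what that could look like on the client side (Go; the store interface and handler below are hypothetical, not the existing code):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Registration mirrors what client stores in Mongo (placeholder fields).
    type Registration struct {
        ID     string `json:"id"`
        Name   string `json:"name"`
        Format string `json:"format"`
    }

    // RegistrationStore hides the DB access behind an interface, so only
    // client needs a Mongo driver.
    type RegistrationStore interface {
        All() ([]Registration, error)
    }

    // memStore is a trivial in-memory stand-in for the Mongo-backed store.
    type memStore struct{ regs []Registration }

    func (m memStore) All() ([]Registration, error) { return m.regs, nil }

    func registrationHandler(store RegistrationStore) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            regs, err := store.All()
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(regs)
        }
    }

    func main() {
        store := memStore{} // backed by Mongo in the real service
        http.HandleFunc("/api/v1/registration", registrationHandler(store))
        log.Fatal(http.ListenAndServe(":48071", nil))
    }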

The reason for two different services was to allow export-client, in some cases, to be removed, as it might be provided by a system management tool or other UI application that allows for a more user-friendly deposit of export client registrations. Distro, therefore, was made a separate service.
The second limitation that comes with this replacement is that distro
will expect the custom service that plays the role of the registration
client to expose the aforementioned API (and if we add caching, it will
also expect notifications to come). This imposes some (although not
significant) requirements on a custom client registration service. But
some of these requirements exist even today - like the format that is
written into the DB by the client, so that distro can read and parse
it...

Best regards,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic

Fede Claramonte
 

Jim, Drasko

client already has an API (/api/v1/registration) that can be used to obtain all registrations. It should be easy to use it instead of Mongo. If we keep them as different microservices, distro can obtain all registrations from client when it is notified of changes or when it starts.
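
For example, something like this on the distro side (rough Go sketch; the client URL and the types are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type Registration struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    // fetchRegistrations asks the client service for the current list,
    // e.g. on startup or when a change notification arrives.
    func fetchRegistrations(clientURL string) ([]Registration, error) {
        resp, err := http.Get(clientURL + "/api/v1/registration")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("client returned %s", resp.Status)
        }
        var regs []Registration
        if err := json.NewDecoder(resp.Body).Decode(&regs); err != nil {
            return nil, err
        }
        return regs, nil
    }

    func main() {
        regs, err := fetchRegistrations("http://localhost:48071") // placeholder address
        if err != nil {
            fmt.Println("could not reach client:", err)
            return
        }
        fmt.Printf("loaded %d registrations\n", len(regs))
    }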

But I would suggest merging both microservices in the Go implementation. The API and the behavior would be the same as before and we don't need to make any change in the design. It allows us to complete the export port without modifying the Java implementation. If we want something similar to running only distro, we can add a command-line flag to disable the REST API and read the registration list from a file.
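
A minimal sketch of what such flags could look like (Go standard flag package; the flag names are only suggestions):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // --no-rest would skip starting the client REST API, and
        // --registrations would load the list from a file instead of Mongo.
        noREST := flag.Bool("no-rest", false, "do not start the client REST API")
        regFile := flag.String("registrations", "", "load registrations from this JSON file")
        flag.Parse()

        if *noREST && *regFile != "" {
            fmt.Println("running distro only, registrations read from", *regFile)
        }
        // ... start distro (and optionally client) based on the flags ...
    }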

And when the new messaging/pubsub mechanism is chosen we can think again about the best path to implement it.

About the plugins, I think they are a good option but I have mixed feelings about it. I don't know if it fits better in a microservice architecture to have a different microservice for each connector. In my opinion, the compromise of having the more generic connectors in distro (as it is today) and the more custom connectors in their own microservices could be a good trade-off.

Fede


Drasko DRASKOVIC
 

Hi Fede,

On Tue, Oct 31, 2017 at 10:48 AM, Fede Claramonte
<fclaramonte@...> wrote:
Jim, Drasko

client already has an API (/api/v1/registration) that can be used to obtain
all registrations. It should be easy to use it instead of Mongo. If we keep
them as different microservices, distro can obtain all registrations from
client when it is notified of changes or when it starts.
Does distro need **all** registrations that exist in the client DB at
start, or can it demand them one by one when it needs them and then put
them in the cache? I.e. why would you pick up everything that is in the
client database, when you want to use the credentials for stream/user X
only when a message for it comes from the core? Then you would ask the
client to read those credentials for you via the client's API and store
them in your local cache. At the same time you would subscribe to
notifications coming from the client for this particular stream (not the
others), to see if its credentials change.

Does this sound reasonable?


But I would suggest merging both microservices in the Go implementation.
The API and the behavior would be the same as before and we don't need to
make any change in the design. It allows us to complete the export port
without modifying the Java implementation. If we want something similar to
running only distro, we can add a command-line flag to disable the REST API
and read the registration list from a file.
I am not against this, I am just trying to explore the options we have
today (actually I am trying to better understand the current architecture
myself). Jim mentioned that only client could potentially be replaced,
and it makes sense that in that case we just do not start the REST API of
the client in the monolithic binary, but it would be even better if we
did not even have to include the code.

I would really like to hear Dejan's opinion on this as well, as he has a
lot of experience in building these architectures.


And when the new messaging/pubsub mechanism is chosen we can think again
about the best path to implement it.

About the plugins, I think they are a good option but I have mixed feelings
about it. I don't know if it fits better in a microservice architecture to
have a different microservice for each connector. In my opinion, the
compromise of having the more generic connectors in distro (as it is today)
and the more custom connectors in their own microservices could be a good
trade-off.
I see all these connectors/adapters as separate microservices, each
dealing with a separate protocol, picking up events from the distro core
that were published for that protocol.

BR,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic

Fede Claramonte
 

Hi Drasko


On 31/10/17 11:06, Drasko DRASKOVIC wrote:
Hi Fede,

On Tue, Oct 31, 2017 at 10:48 AM, Fede Claramonte
<fclaramonte@...> wrote:
Jim, Drasko

client already has an API (/api/v1/registration) that can be used to obtain
all registrations. It should be easy to use it instead of Mongo. If we keep
them as different microservices, distro can obtain all registrations from
client when it is notified of changes or when it starts.
Does distro need **all** registrations that exist in the client DB at
start, or can it demand them one by one when it needs them and then put
them in the cache? I.e. why would you pick up everything that is in the
client database, when you want to use the credentials for stream/user X
only when a message for it comes from the core? Then you would ask the
client to read those credentials for you via the client's API and store
them in your local cache. At the same time you would subscribe to
notifications coming from the client for this particular stream (not the
others), to see if its credentials change.

Does this sound reasonable?
As I understand the design, distro just needs to send the events it receives from core-data to each registration. A registration is just the information about how (encoding, compression, crypto) and where (protocol and URL) each event should be sent. So yes, distro needs to have the complete registration list to work.
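
In other words, something roughly like this (a hypothetical Go struct, not the actual model, just to show the shape of a registration):

    package export

    // Registration: "how" and "where" an event should be exported.
    // Field names are illustrative only.
    type Registration struct {
        Name        string // registration identifier
        Format      string // encoding, e.g. "JSON" or "XML"
        Compression string // e.g. "NONE", "GZIP", "ZIP"
        Encryption  string // crypto applied to the payload, if any
        Destination string // protocol + URL, e.g. "mqtt://broker:1883/topic"
        Enable      bool
    }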

But I would suggest merging both microservices in the Go implementation.
The API and the behavior would be the same as before and we don't need to
make any change in the design. It allows us to complete the export port
without modifying the Java implementation. If we want something similar to
running only distro, we can add a command-line flag to disable the REST API
and read the registration list from a file.
I am not against this, I am just trying to explore the options we have
today (actually I am trying to better understand the current architecture
myself). Jim mentioned that only client could potentially be replaced,
and it makes sense that in that case we just do not start the REST API of
the client in the monolithic binary, but it would be even better if we
did not even have to include the code.
With the layout you set up for the Go code it could be easy to have different commands: distro configured with a file, distro + client, or distro and client as different microservices. But I'm not sure what the drawback is of having a single command configurable at runtime (command-line flags) - executable size?

And when the new messaging/pubsub mechanism is chosen we can think again
about the best path to implement it.

About the plugins, I think they are a good option but I have mixed feelings
about it. I don't know if it fits better in a microservice architecture to
have a different microservice for each connector. In my opinion, the
compromise of having the more generic connectors in distro (as it is today)
and the more custom connectors in their own microservices could be a good
trade-off.
I see all these connectors/adapters as separate microservices, each
dealing with a separate protocol, picking up events from the distro core
that were published for that protocol.
What are the benefits of getting them from distro instead of core-data directly?

Fede

Drasko DRASKOVIC
 

Hi Fede,

On Tue, Oct 31, 2017 at 11:34 AM, Fede Claramonte
<fclaramonte@...> wrote:
Hi Drasko


On 31/10/17 11:06, Drasko DRASKOVIC wrote:

Hi Fede,

On Tue, Oct 31, 2017 at 10:48 AM, Fede Claramonte
<fclaramonte@...> wrote:

Jim, Drasko

client already has an API (/api/v1/registration) that can be used to
obtain all registrations. It should be easy to use it instead of Mongo.
If we keep them as different microservices, distro can obtain all
registrations from client when it is notified of changes or when it
starts.
Does distro need **all** registrations that exist in the client DB at
start, or can it demand them one by one when it needs them and then put
them in the cache? I.e. why would you pick up everything that is in the
client database, when you want to use the credentials for stream/user X
only when a message for it comes from the core? Then you would ask the
client to read those credentials for you via the client's API and store
them in your local cache. At the same time you would subscribe to
notifications coming from the client for this particular stream (not the
others), to see if its credentials change.

Does this sound reasonable?
As I understand the design, distro just needs to send the events it
receives from core-data to each registration. A registration is just the
information about how (encoding, compression, crypto) and where (protocol
and URL) each event should be sent. So yes, distro needs to have the
complete registration list to work.
Distro gets events from the core, but it can then analyze the event,
understand that this is an event for X that needs the credentials for X,
and demand **only this set of credentials** from client, not the whole
database (where you have credentials for X, Y, Z, etc.). Then it caches
these credentials for subsequent events that come from core and are
destined for X.
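
A small sketch of that lazy, per-registration lookup (Go; the per-name endpoint on client and the types are hypothetical):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "sync"
    )

    type Credentials struct {
        User     string `json:"user"`
        Password string `json:"password"`
    }

    type credCache struct {
        mu    sync.Mutex
        creds map[string]Credentials // keyed by stream/registration name
    }

    // Get returns cached credentials for a stream, fetching them from the
    // client service only on the first miss.
    func (c *credCache) Get(stream string) (Credentials, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if cr, ok := c.creds[stream]; ok {
            return cr, nil
        }
        // hypothetical per-name endpoint on the client service
        resp, err := http.Get("http://localhost:48071/api/v1/registration/name/" + stream)
        if err != nil {
            return Credentials{}, err
        }
        defer resp.Body.Close()
        var cr Credentials
        if err := json.NewDecoder(resp.Body).Decode(&cr); err != nil {
            return Credentials{}, err
        }
        c.creds[stream] = cr
        return cr, nil
    }

    func main() {
        cache := &credCache{creds: make(map[string]Credentials)}
        cr, err := cache.Get("streamX")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("using credentials for", cr.User)
    }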


But I would suggest merging both microservices in the Go implementation.
The API and the behavior would be the same as before and we don't need to
make any change in the design. It allows us to complete the export port
without modifying the Java implementation. If we want something similar to
running only distro, we can add a command-line flag to disable the REST API
and read the registration list from a file.
I am not against this, I am just trying to explore the options we have
today (actually I am trying to better understand the current architecture
myself). Jim mentioned that only client could potentially be replaced,
and it makes sense that in that case we just do not start the REST API of
the client in the monolithic binary, but it would be even better if we
did not even have to include the code.
With the layout you set up for the Go code it could be easy to have
different commands: distro configured with a file, distro + client, or
distro and client as different microservices. But I'm not sure what the
drawback is of having a single command configurable at runtime
(command-line flags) - executable size?
This is also my concern, plus responsibility division (logical separation).


And when the new messaging/pubsub mechanism is chosen we can think again
about the best path to implement it.

About the plugins, I think they are a good option but I have mixed
feelings about it. I don't know if it fits better in a microservice
architecture to have a different microservice for each connector. In my
opinion, the compromise of having the more generic connectors in distro
(as it is today) and the more custom connectors in their own
microservices could be a good trade-off.
I see all these connectors/adapters as separate microservices, each
dealing with a separate protocol, picking up events from the distro core
that were published for that protocol.
What are the benefits of getting them from distro instead of core-data
directly?
Distro gets events from core and knows how to analyze them. Protocol
adapters are dumb message senders - they are stateless, do not care
about other EdgeX functionality, and only know how to forward a
message coming to them from distro to the appropriate protocol.
Responsibility division and simplification of the system.

If you have 10 different adapters, I think one centralized pub/sub
service would be simpler than opening a socket (gRPC, ZMQ, ...) from
distro towards each adapter.

I saw the mails from Brian from the NATS project (I know him because we
use NATS at Mainflux). This is one potential path to explore, although I
would like us to stay as technology agnostic as possible (a pub/sub
broker is a general concept; whether we then use NATS, RabbitMQ, or
something else does not matter. It just happens that NATS is written in
Go and is extremely simple and powerful).
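
Just to show how small the broker glue can be, a rough sketch with the NATS Go client (the subject names are made up; any broker with pub/sub semantics would look similar):

    package main

    import (
        "log"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // A protocol adapter subscribes only to the subject it cares about.
        _, err = nc.Subscribe("export.mqtt", func(m *nats.Msg) {
            log.Printf("mqtt adapter forwarding %d bytes", len(m.Data))
            // ... push to the MQTT broker ...
        })
        if err != nil {
            log.Fatal(err)
        }

        // distro publishes the already filtered/formatted event to the
        // subject matching the registration's destination protocol.
        if err := nc.Publish("export.mqtt", []byte(`{"device":"X","reading":42}`)); err != nil {
            log.Fatal(err)
        }
        nc.Flush()

        time.Sleep(100 * time.Millisecond) // let the toy subscriber run before exiting
    }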

Best regards,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic

Drasko DRASKOVIC
 

Comments appreciated: https://github.com/drasko/edgex-export/pull/36

BR,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic


Drasko DRASKOVIC
 

Hi Jim,

On Mon, Oct 30, 2017 at 3:58 PM, <James.White2@...> wrote:
Hi Fede,
As always, you have identified some shortcomings which are on target. Drasko has noted some of the same issues.
Yes - we did not get a chance to develop an export data client library so that the export services could have some isolation with regard to the database - particularly export distro. It would be good for distro not to have to reference Mongo (or another DB) directly.

Yes - there need to be callbacks in place to allow distro to be notified of changes. That would allow distro to contain a "cache" of client registrations and not have to read from the DB each time. TODO notes in the old Fuse code outline this exact need.
Coming back to this, can you please tell us how this notification
implementation was planned in Fuse? I'm asking because we are having a
bit of a discussion regarding this issue:
https://github.com/drasko/edgex-export/issues/34, and I would
appreciate your view on the architecture.

In short, Fede proposes empty notifications that just trigger a database
reload into the distro cache (and it sounds reasonable not to duplicate
client functionality). I proposed (and the Mainflux team agrees
internally) that an update message should be sent in the notification
(carrying the type of update that happened), and that distro should then
consume this message to maintain its cache.

Best regards,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic

James.White2@...
 

I can describe how we did it for other parts of Fuse (now EdgeX) and then let you guys either adopt it or pick an alternative path as you see fit...

In Metadata, for example, we use the notification service. When a device changes, the notification service calls the device service's pre-registered callback address. In the callback, we send the identifier of the device that changed along with an "Action" enum, which is either a POST (new device), PUT (updated device) or DELETE (removed device) indicator. The device service can then act on the change as it desires based on the call.
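
For the export case, the equivalent payload might look something like this (hypothetical Go types, not the actual Metadata contract):

    package export

    // Action mirrors the POST/PUT/DELETE indicator described above.
    type Action string

    const (
        ActionPost   Action = "POST"   // new registration
        ActionPut    Action = "PUT"    // updated registration
        ActionDelete Action = "DELETE" // removed registration
    )

    // Notification is what client would send to distro's callback.
    type Notification struct {
        ID     string `json:"id"`     // identifier of the registration that changed
        Action Action `json:"action"`
    }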

Using the notification service means the two services do not have to know anything about each other. It also allows the "callback" to be a REST call today but a message call tomorrow. We used just enough data in the notification to let the receiving device service know what happened on which device, but allowed the device service to pick its method of handling it. The easy way would be just to empty its whole knowledge base (a la Fede's approach). More intelligent device services would deal with just the specific device in the knowledge base.

Perhaps that is the approach you can use. Allow the client app to send the notification with the details of the precise client change/update, but allow the applications (in this case distro, but it could be others in the future) to determine what to do with the info - and start with a simple cache empty/reload as a first implementation, then improve on that if more time/resources are available.

Jim

Drasko DRASKOVIC
 

Hi Jim,

On Thu, Nov 9, 2017 at 11:34 PM, <James.White2@...> wrote:
I can describe how we did it for other parts of Fuse (now EdgeX) and then let you guys either adopt it or pick an alternative path as you see fit...

In Metadata, for example, we use the notification service. When a device changes, the notification service calls the device service's pre-registered callback address. In the callback, we send the identifier of the device that changed along with an "Action" enum, which is either a POST (new device), PUT (updated device) or DELETE (removed device) indicator. The device service can then act on the change as it desires based on the call.

Using the notification service means the two services do not have to know anything about each other. It also allows the "callback" to be a REST call today but a message call tomorrow. We used just enough data in the notification to let the receiving device service know what happened on which device, but allowed the device service to pick its method of handling it. The easy way would be just to empty its whole knowledge base (a la Fede's approach). More intelligent device services would deal with just the specific device in the knowledge base.

Perhaps that is the approach you can use. Allow the client app to send the notification with the details of the precise client change/update, but allow the applications (in this case distro, but it could be others in the future) to determine what to do with the info - and start with a simple cache empty/reload as a first implementation, then improve on that if more time/resources are available.
Thanks a lot. I think this closely resembles what I proposed - over
one API endpoint, multiplex the notification type (CRUD) into the
message. When distro gets the message it parses it to understand a)
what type of operation was done and b) to pick up the updated data if
the operation was CREATE or UPDATE (if it was DELETE, no data is
passed, just the command type). Then distro updates its cache based on
this.

Basically, distro just has to do a `switch` on the Action type, as you
call it: if the Action type is CREATE or UPDATE, it picks up the
message data and creates/replaces its cache entry, and if it is DELETE,
it evicts that cache entry.
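
Roughly like this (a self-contained Go sketch with placeholder types, just to show the shape of the handler):

    package main

    import (
        "fmt"
        "sync"
    )

    type Action string

    const (
        ActionPost   Action = "POST"
        ActionPut    Action = "PUT"
        ActionDelete Action = "DELETE"
    )

    type Notification struct {
        ID     string
        Action Action
    }

    type Registration struct{ Name string }

    type cache struct {
        mu   sync.Mutex
        regs map[string]Registration
    }

    // apply creates/replaces the cache entry for a CREATE or UPDATE and
    // evicts it for a DELETE - exactly the switch described above.
    func (c *cache) apply(n Notification, reg Registration) {
        c.mu.Lock()
        defer c.mu.Unlock()
        switch n.Action {
        case ActionPost, ActionPut:
            c.regs[n.ID] = reg // create or replace from the notification data
        case ActionDelete:
            delete(c.regs, n.ID) // no data carried, just evict
        }
    }

    func main() {
        c := &cache{regs: make(map[string]Registration)}
        c.apply(Notification{ID: "r1", Action: ActionPut}, Registration{Name: "httpExport"})
        c.apply(Notification{ID: "r1", Action: ActionDelete}, Registration{})
        fmt.Println(len(c.regs), "registrations cached")
    }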

Best regards,
Drasko DRASKOVIC
Mainflux Author and Technical Advisor

www.mainflux.com | Industrial IoT Cloud
-------------------------------------------------------------------
Engineering Division | Paris, France

LinkedIn: https://www.linkedin.com/in/draskodraskovic
Twitter: @draskodraskovic