Re: EdgeX: Systems Management Working Group Chair Nominations

Akram Ahmad
 

+1 for Michael Estrin, Dell

On Sun, Nov 10, 2019 at 5:52 PM Brett Preston <bpreston@...> wrote:
Members of the EdgeX Community,

As noted at the Phoenix F2F meeting, the EdgeX System Management Working Group has opened up nominations for Working Group Chair. 

Current Nominations are:
  • Michael Estrin, Dell
Should anyone else wish to nominate themselves or another contributor, please provide the nomination by 5pm PDT on Sunday, November 17. If no further nominations are received, Michael Estrin will be named Systems Management Working Group Chair.

Thank you,


Brett

--
Brett Preston
Sr. Program Manager
The Linux Foundation
+1 (971) 303-9030

EdgeX: Systems Management Working Group Chair Nominations

Brett Preston
 

Members of the EdgeX Community,

As noted at the Phoenix F2F meeting, the EdgeX System Management Working Group has opened up nominations for Working Group Chair. 

Current Nominations are:
  • Michael Estrin, Dell
Should anyone else wish to nominate themselves or another contributor, please provide the nomination by 5pm PDT on Sunday, November 17. If no further nominations are received, Michael Estrin will be named Systems Management Working Group Chair.

Thank you,


Brett

--
Brett Preston
Sr. Program Manager
The Linux Foundation
+1 (971) 303-9030

EdgeX F2F Training Day - feedback survey

Jim White
 

All,
If you attended part of the F2F training in Phoenix or via Zoom today, would you please take 30 seconds to provide us with feedback so that we can work to improve the next event?


Thanks,
Jim White
CTO, IOTech
EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair
On EdgeX Slack @ jpwhite
612-916-6693

Face-to-face training

Jim White
 

Good morning everyone.  Training day of our semi-annual face-to-face TSC meeting happens today and it features the following classes (all times are in MST):

9:00am – EdgeX Introductory Tutorial (Alex Courouble – VMware)
10:00am – Security Services Intro (Malini, Tingyu, Byron, Jim Wang)
11:00am – An Introduction to Redis (Andre Srinivasan – Redis)
12:00pm – Lunch
1:00pm – CLI Tutorial (Alex – VMware)
1:45pm – Dependency Injection Intro (Michael Estrin – Dell)
2:30pm – Jenkins Pipelines (Lisa/James – Intel)
3:15pm – Using the SDK and App Configurable Lab (Johanson & Goodell – Intel)

For those not here in Phoenix, you can still attend this training via Zoom.  Here are the Zoom details:
Join Zoom Meeting
https://zoom.us/j/814048924

Meeting ID: 814 048 924

One tap mobile
+16699006833,,814048924# US (San Jose)
+16465588656,,814048924# US (New York)

Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 646 558 8656 US (New York)
        855 880 1246 US Toll-free
        877 369 0926 US Toll-free

We will record the training sessions and post those to the Wiki afterward.

Look forward to having you join us today.

Regards
Jim White
CTO, IOTech
EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair
On EdgeX Slack @ jpwhite
612-916-6693

Re: [Edgex-tsc] Fuji Code Freeze in effect

Goodell, Leonard <leonard.goodell@...>
 

Yes, I have updated the issue to be a bug.

 

From: Jim White <jim@...>
Sent: Thursday, October 24, 2019 9:05 AM
To: Goodell, Leonard <leonard.goodell@...>
Cc: 🐙 TSC <edgex-tsc@...>; edgex-devel@...; Rashidi-ranjbar, Lisa A <lisa.a.rashidi-ranjbar@...>
Subject: Re: [Edgex-tsc] Fuji Code Freeze in effect

 

bug fixes fine.  New code not good.  I would assume this is really more of a bug fix.

j

 

 

On Thu, 24 Oct 2019 at 10:48, Goodell, Leonard <leonard.goodell@...> wrote:

Hi Jim,

   I just submitted this PR, which is configuration only, no code changes.

 

https://github.com/edgexfoundry/edgex-go/pull/1992

 

-Lenny

 

From: EdgeX-TSC@... <EdgeX-TSC@...> On Behalf Of Jim White
Sent: Thursday, October 24, 2019 5:20 AM
To: 🐙 TSC <edgex-tsc@...>; edgex-devel@...
Cc: Rashidi-ranjbar, Lisa A <lisa.a.rashidi-ranjbar@...>
Subject: [Edgex-tsc] Fuji Code Freeze in effect

 

All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

 

Jim White

CTO, IOTech

EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair

On EdgeX Slack @ jpwhite

612-916-6693

Re: [Edgex-tsc] Fuji Code Freeze in effect

Jim White
 

bug fixes fine.  New code not good.  I would assume this is really more of a bug fix.
j


On Thu, 24 Oct 2019 at 10:48, Goodell, Leonard <leonard.goodell@...> wrote:

Hi Jim,

   I just submitted this PR, which is configuration only, no code changes.

 

https://github.com/edgexfoundry/edgex-go/pull/1992

 

-Lenny

 

From: EdgeX-TSC@... <EdgeX-TSC@...> On Behalf Of Jim White
Sent: Thursday, October 24, 2019 5:20 AM
To: 🐙 TSC <edgex-tsc@...>; edgex-devel@...
Cc: Rashidi-ranjbar, Lisa A <lisa.a.rashidi-ranjbar@...>
Subject: [Edgex-tsc] Fuji Code Freeze in effect

 

All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

 

Jim White

CTO, IOTech

EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair

On EdgeX Slack @ jpwhite

612-916-6693

Re: [Edgex-tsc] Fuji Code Freeze in effect

Jim White
 

Code freeze means no new work - bug fixes ok.
j


On Thu, 24 Oct 2019 at 10:22, Cloud Tsai <cloud@...> wrote:
Sorry all, I found a serious bug that I introduced last week in device-sdk-go.
I have made a fix. Could the members of the Device Service WG please help review it and include it in this release?

On Thu, 24 Oct 2019 at 20:20, Jim White <jim@...> wrote:
All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

Jim White
CTO, IOTech
EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair
On EdgeX Slack @ jpwhite
612-916-6693



--
Best Regards,
Cloud Tsai

Re: [Edgex-tsc] Fuji Code Freeze in effect

Goodell, Leonard <leonard.goodell@...>
 

Hi Jim,

   I just submitted this PR, which is configuration only, no code changes.

 

https://github.com/edgexfoundry/edgex-go/pull/1992

 

-Lenny

 

From: EdgeX-TSC@... <EdgeX-TSC@...> On Behalf Of Jim White
Sent: Thursday, October 24, 2019 5:20 AM
To: 🐙 TSC <edgex-tsc@...>; edgex-devel@...
Cc: Rashidi-ranjbar, Lisa A <lisa.a.rashidi-ranjbar@...>
Subject: [Edgex-tsc] Fuji Code Freeze in effect

 

All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

 

Jim White

CTO, IOTech

EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair

On EdgeX Slack @ jpwhite

612-916-6693

Re: [Edgex-tsc] Fuji Code Freeze in effect

Cloud Tsai <cloud@...>
 

Sorry all, I found a serious bug that I introduced last week in device-sdk-go.
I have made a fix. Could the members of the Device Service WG please help review it and include it in this release?


On Thu, 24 Oct 2019 at 20:20, Jim White <jim@...> wrote:
All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

Jim White
CTO, IOTech
EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair
On EdgeX Slack @ jpwhite
612-916-6693



--
Best Regards,
Cloud Tsai

Fuji Code Freeze in effect

Jim White
 

All - code freeze is now in effect for Fuji. Unless you are working at the direction of Michael Estrin, Cloud Tsai, or Tingyu Zeng to solve the current blackbox issues, you should not be submitting any additional PRs. Because we are fighting a blackbox test issue, we can't cut the branch yet. So for those looking to make code changes for Geneva on master, please hang on and work on your own local branches or forks. We are hoping to solve the issue and cut the branch by the weekend.

Jim White
CTO, IOTech
EdgeX Foundry co-founder, TSC Vice Chairman & System Management Working Group Chair
On EdgeX Slack @ jpwhite
612-916-6693

Re: Problem in running Edgex services

espy
 

On 6/25/19 5:58 AM, Sezal Chug wrote:

Hi,
So I was trying to run the latest version of EdgeX on Kubernetes, but only the Consul, volume, and config-seed deployments would run. When I run the logging deployment, it shows a problem.

Did you use kompose to convert the docker-compose files to k8s? I ran into exactly the same volume errors when I tried using kompose a few months back.

Regards,
/tony



This is the description for my pod
Name:               logging-78c5748598-q87qw
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               admin1/192.168.1.9
Start Time:         Tue, 25 Jun 2019 15:23:46 +0530
Labels:             io.kompose.service=logging
                    pod-template-hash=78c5748598
Annotations:        <none>
Status:             Running
IP:                 10.42.0.35
Controlled By:      ReplicaSet/logging-78c5748598
Containers:
  edgex-support-logging:
    Container ID:  containerd://af5f410195f01b2744193c24449e22334a140cb4da356f2efefd7c0e0f5a0a69
    Image:         nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1
    Image ID:      docker.io/edgexfoundry/docker-support-logging-go@sha256:f0b2d67554547e2dd3282f4878b57a79923cf9b5ee730e19dd5a4b42946b2ee6
    Port:          48061/TCP
    Host Port:     0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Jun 2019 15:23:50 +0530
      Finished:     Tue, 25 Jun 2019 15:23:50 +0530
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /consul/config from consul-config (rw)
      /consul/data from consul-data (rw)
      /data/db from db-data (rw)
      /edgex/logs from log-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hrf5p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  db-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db-data
    ReadOnly:   false
  log-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  log-data
    ReadOnly:   false
  consul-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-config
    ReadOnly:   false
  consul-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-data
    ReadOnly:   false
  default-token-hrf5p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hrf5p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  18s                default-scheduler  Successfully assigned default/logging-78c5748598-q87qw to admin1
  Normal   Pulled     14s (x2 over 16s)  kubelet, admin1    Container image "nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1" already present on machine
  Normal   Created    13s (x2 over 15s)  kubelet, admin1    Created container edgex-support-logging
  Normal   Started    13s (x2 over 15s)  kubelet, admin1    Started container edgex-support-logging
  Warning  BackOff    11s (x2 over 12s)  kubelet, admin1    Back-off restarting failed container

Please check why it is exiting with exit code 1.
The logs are below.
ERROR: 2019/06/25 09:35:02 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:03 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:04 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:05 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:06 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:07 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:08 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:09 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:10 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:11 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:12 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:13 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:14 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:15 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:16 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:17 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:18 edgex-support-logging: Service bootstrap failed!

Re: Problem in running Edgex services

James.White2@...
 

Sezal,

It appears that your services are unable to get to Consul.  I am not sure what or how you have set up your pods, etc.  If you message me privately, I can put you in touch with a member of my team who has some of EdgeX running in Kubernetes.
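
One possible way to address the "no such host" lookup failures in the logs is to expose the Consul pod behind a Kubernetes Service named edgex-core-consul, so that in-cluster DNS resolves the hostname the EdgeX services dial during registration. The sketch below is only an illustration; the selector label is an assumption based on the kompose-style labels shown in the pod description and should be adjusted to whatever labels your actual Consul deployment uses:

apiVersion: v1
kind: Service
metadata:
  name: edgex-core-consul        # must match the hostname the services try to resolve
spec:
  selector:
    io.kompose.service: consul   # assumption: kompose-generated label on the Consul pod
  ports:
    - name: http
      port: 8500                 # Consul HTTP API port used for service registration
      targetPort: 8500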

 

 

Jim White

Director, IoT Platform Development Team & Distinguished Engineer

EdgeX Foundry Technical Steering Committee Vice Chairman

Dell Technologies | IoT Solutions Division

Office +1 512-723-6139, mobile/text +1 612-916-6693

james_white2@...

 

 

 

From: EdgeX-Devel@... <EdgeX-Devel@...> On Behalf Of Sezal Chug
Sent: Tuesday, June 25, 2019 4:59 AM
To: EdgeX-Devel@...
Subject: [Edgex-devel] Problem in running Edgex services

 

[EXTERNAL EMAIL]

Hi,

So I was trying to run the latest version of EdgeX on Kubernetes, but only the Consul, volume, and config-seed deployments would run. When I run the logging deployment, it shows a problem.

This is the description for my pod

Name:               logging-78c5748598-q87qw
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               admin1/192.168.1.9
Start Time:         Tue, 25 Jun 2019 15:23:46 +0530
Labels:             io.kompose.service=logging
                    pod-template-hash=78c5748598
Annotations:        <none>
Status:             Running
IP:                 10.42.0.35
Controlled By:      ReplicaSet/logging-78c5748598
Containers:
  edgex-support-logging:
    Container ID:  containerd://af5f410195f01b2744193c24449e22334a140cb4da356f2efefd7c0e0f5a0a69
    Image:         nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1
    Image ID:      docker.io/edgexfoundry/docker-support-logging-go@sha256:f0b2d67554547e2dd3282f4878b57a79923cf9b5ee730e19dd5a4b42946b2ee6
    Port:          48061/TCP
    Host Port:     0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Jun 2019 15:23:50 +0530
      Finished:     Tue, 25 Jun 2019 15:23:50 +0530
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /consul/config from consul-config (rw)
      /consul/data from consul-data (rw)
      /data/db from db-data (rw)
      /edgex/logs from log-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hrf5p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  db-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db-data
    ReadOnly:   false
  log-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  log-data
    ReadOnly:   false
  consul-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-config
    ReadOnly:   false
  consul-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-data
    ReadOnly:   false
  default-token-hrf5p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hrf5p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  18s                default-scheduler  Successfully assigned default/logging-78c5748598-q87qw to admin1
  Normal   Pulled     14s (x2 over 16s)  kubelet, admin1    Container image "nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1" already present on machine
  Normal   Created    13s (x2 over 15s)  kubelet, admin1    Created container edgex-support-logging
  Normal   Started    13s (x2 over 15s)  kubelet, admin1    Started container edgex-support-logging
  Warning  BackOff    11s (x2 over 12s)  kubelet, admin1    Back-off restarting failed container

 

Please check why it is exiting with exit code 1.

The logs are below.

ERROR: 2019/06/25 09:35:02 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:03 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:04 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:05 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:06 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:07 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:08 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:09 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:10 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:11 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:12 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:13 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:14 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:15 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:16 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:17 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:18 edgex-support-logging: Service bootstrap failed!

 

Problem in running Edgex services

Sezal Chug <sezal17101@...>
 

Hi,
So I was trying to run the latest version of EdgeX on Kubernetes, but only the Consul, volume, and config-seed deployments would run. When I run the logging deployment, it shows a problem.
This is the description for my pod
Name:               logging-78c5748598-q87qw
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               admin1/192.168.1.9
Start Time:         Tue, 25 Jun 2019 15:23:46 +0530
Labels:             io.kompose.service=logging
                    pod-template-hash=78c5748598
Annotations:        <none>
Status:             Running
IP:                 10.42.0.35
Controlled By:      ReplicaSet/logging-78c5748598
Containers:
  edgex-support-logging:
    Container ID:  containerd://af5f410195f01b2744193c24449e22334a140cb4da356f2efefd7c0e0f5a0a69
    Image:         nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1
    Image ID:      docker.io/edgexfoundry/docker-support-logging-go@sha256:f0b2d67554547e2dd3282f4878b57a79923cf9b5ee730e19dd5a4b42946b2ee6
    Port:          48061/TCP
    Host Port:     0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Jun 2019 15:23:50 +0530
      Finished:     Tue, 25 Jun 2019 15:23:50 +0530
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /consul/config from consul-config (rw)
      /consul/data from consul-data (rw)
      /data/db from db-data (rw)
      /edgex/logs from log-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hrf5p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  db-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db-data
    ReadOnly:   false
  log-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  log-data
    ReadOnly:   false
  consul-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-config
    ReadOnly:   false
  consul-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  consul-data
    ReadOnly:   false
  default-token-hrf5p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hrf5p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  18s                default-scheduler  Successfully assigned default/logging-78c5748598-q87qw to admin1
  Normal   Pulled     14s (x2 over 16s)  kubelet, admin1    Container image "nexus3.edgexfoundry.org:10002/docker-support-logging-go:0.7.1" already present on machine
  Normal   Created    13s (x2 over 15s)  kubelet, admin1    Created container edgex-support-logging
  Normal   Started    13s (x2 over 15s)  kubelet, admin1    Started container edgex-support-logging
  Warning  BackOff    11s (x2 over 12s)  kubelet, admin1    Back-off restarting failed container

Please check why it is exiting with exit code 1.
The logs are below.
ERROR: 2019/06/25 09:35:02 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:03 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:04 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:05 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:06 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:07 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:08 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:09 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:10 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:11 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:12 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:13 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:14 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:15 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:16 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:17 connection to Consul could not be made: Put http://edgex-core-consul:8500/v1/agent/service/register: dial tcp: lookup edgex-core-consul on 10.43.0.10:53: no such host
ERROR: 2019/06/25 09:35:18 edgex-support-logging: Service bootstrap failed!

Re: edgexfoundry installation

espy
 

On 6/4/19 8:10 AM, James.White2@... wrote:

Venkat – as one alternative, EdgeX also provides Ubuntu snaps.  These are provided through the Snap Store.

One minor correction: snaps are supported on any Linux distro with snapd support, so technically they're not "Ubuntu" snaps. The edgexfoundry snap has in fact been installed on Debian, Kali, and Zorin, among other distros.

For more information on how to install and use the snap of EdgeX see:

https://github.com/edgexfoundry/edgex-go/blob/master/snap/README.md

If you're running any recent version of Ubuntu, installing the snap is a one-line command:

$ sudo snap install edgexfoundry

This will install from the latest/stable channel, which is the Delhi release of EdgeX. If you want to experiment with the current development release "Edinburgh", you just need to indicate that you want to install from the edinburgh/edge channel, where daily builds are published:

$ sudo snap install edgexfoundry --channel=edinburgh/edge

Please let me know if you have any questions.

Regards,
/tony

 

Others in our community (for example Mainflux) have used other means or simple scripting to install, but that is not something we maintain as part of the community artifacts.  You might want to solicit other alternatives through our Slack channels (edgexfoundry.slack.com).

 

Jim

 

From: EdgeX-Devel@... <EdgeX-Devel@...> On Behalf Of va00510223@...
Sent: Monday, June 3, 2019 11:53 PM
To: EdgeX-Devel@...
Subject: [Edgex-devel] edgexfoundry installation

 

[EXTERNAL EMAIL]

Hi,
Do we have any way of installing EdgeX Foundry without Docker?
Please suggest.

Thanks
Venkat

Re: device-sdk in Python

James.White2@...
 

Alex,

There is no immediate plan to provide a Python DS SDK, but if there is a contribution, we would welcome the addition of alternate language SDKs (either DS or application functions SDK).

 

Jim White

Director, IoT Platform Development Team & Distinguished Engineer

EdgeX Foundry Technical Steering Committee Vice Chairman

Dell Technologies | IoT Solutions Division

Office +1 512-723-6139, mobile/text +1 612-916-6693

james_white2@...

 

 

 

From: EdgeX-Devel@... <EdgeX-Devel@...> On Behalf Of Alex Gonzalez
Sent: Wednesday, June 5, 2019 7:20 AM
To: EdgeX-Devel@...
Subject: [Edgex-devel] device-sdk in Python

 

[EXTERNAL EMAIL]

Hi,

Just wondering whether there is any ongoing effort for a Python device SDK.

Regards,
Alex

device-sdk in Python

Alex Gonzalez
 

Hi,

Just wondering whether there is any ongoing effort for a Python device SDK.

Regards,
Alex

Re: edgexfoundry installation

James.White2@...
 

Venkat – as one alternative, EdgeX also provides Ubuntu snaps.  These are provided through the Snap Store.

 

Others in our community (for example Mainflux) have used other means or simple scripting to install, but that is not something we maintain as part of the community artifacts.  You might want to solicit other alternatives through our Slack channels (edgexfoundry.slack.com).

 

Jim

 

From: EdgeX-Devel@... <EdgeX-Devel@...> On Behalf Of va00510223@...
Sent: Monday, June 3, 2019 11:53 PM
To: EdgeX-Devel@...
Subject: [Edgex-devel] edgexfoundry installation

 

[EXTERNAL EMAIL]

Hi,
Do we have any way of installing EdgeX Foundry without Docker?
Please suggest.

Thanks
Venkat

edgexfoundry installation

va00510223@...
 

Hi,
Do we have any way of installing EdgeX Foundry without Docker?
Please suggest.

Thanks
Venkat

Re: [Edgex-golang] Community Feedback Required on PRs 1348 / 1359

Goodell, Leonard <leonard.goodell@...>
 

Ian makes a good point: in the non-Docker scenarios, the configuration.toml is easily modified to specify the use of Redis, so in those cases having a new command line switch is redundant.

 

I still think the command line switch is preferred for the Docker scenario. I'd like the DB settings override to be in code rather than in the Dockerfile. Having the switch doesn't make it required in the non-Docker scenarios; there, the configuration.toml can still be modified rather than using the switch.

 

-Lenny

 

From: EdgeX-GoLang@... <EdgeX-GoLang@...> On Behalf Of Ian Johnson
Sent: Tuesday, May 21, 2019 11:47 AM
To: Goodell, Leonard <leonard.goodell@...>
Cc: Trevor.Conn@...; EdgeX-GoLang@...; edgex-tsc-core@...; EdgeX-Devel@...
Subject: Re: [Edgex-golang] Community Feedback Required on PRs 1348 / 1359

 

Apologies for the long email, but there's a bit of background necessary to explain why I don't think it's a good idea to use this command line flag in the snap.

 

With the snap we are in a better position than Docker to use the configuration files more directly because the configuration files live on the user's host file system and not inside the container where modification is tricky. Due to this, we try to strike a nice balance in the snap between allowing the user to modify the configuration files directly and also enable using consul for more automated device management tasks. Our current plan for managing this can be read about in issue #922.

 

Basically, in the snap we want to have 2 ways to run config-seed, one automated way which runs without the overwrite flag, and one manual way which runs with the overwrite flag. This allows a developer or manual user to easily change configuration for the services by modifying the files locally on their file system directly and then run a single command to push the content of the configuration from the filesystem into Consul.

 

I don't think the addition of this new command line option will really help the snap at all. To explain why, let me describe how using Redis currently works. If a user wants to use Redis, our current mechanism would be just to have the user run:

 

snap set dbtype=redisdb

 

(or they could use the snapd REST API to do the same thing)

 

That then triggers a configuration hook to run, which processes the configuration.toml files locally for redis using a tool called remarshal, then pushes that configuration into consul so that an automated configuration management system can interact with consul to get/set configuration afterwards. This has the combined benefit of allowing a local user or developer to know what and where the configuration items that are set come from. They either originated from local configuration files or from something driving consul. The “snap set” mechanism described above merely is used as a transparent external means to modify something internal to EdgeX, the configuration files and Consul. It’s not meant to store state about EdgeX configuration, just to conveniently trigger those internal changes.

 

If we now consider using the command line flag that Trevor has proposed, if we are to use that option in the snap, we now need to track internally in the snap what database was configured with the “snap set” command above, and more importantly use that config item to determine what flags to launch config-seed with (dbtype=redis -> --database=redis, etc.). This means that in the snap, EdgeX now has an external system controlling configuration of the system and someone using the snap has a third dimension for control-plane/configuration:

 

  • Configuration files
  • Consul
  • Snapd configuration items

 

Previously, the snapd configuration items were only used to track what services were on/off, which is no different to someone using docker-compose to start/stop certain services and is a natural way to track this.

 

Our intention with the snap is to minimize the extent to which the snap distribution differs from the other distribution methods, but also to add convenient features which the snap can provide that aren’t available to docker-compose and native packaging methods. While it may be slightly simpler to add this command line flag for Redis to config-seed and use it within the snap, I think that it is simpler from a user’s perspective (as well as a documentation perspective) to have all the EdgeX configuration live within EdgeX. Having configuration items live in multiple locations significantly contributes to user confusion and complexity as the software ages and so trying to minimize this from the 1.0 release is beneficial.

 

 

 

On Tue, May 21, 2019 at 11:13 AM Goodell, Leonard <leonard.goodell@...> wrote:

I prefer PR 1348 as it works for native and snaps, not just Docker. Also, I think it is more straightforward than modifying the toml file during the Docker build.

 

-Lenny

 

From: EdgeX-GoLang@... <EdgeX-GoLang@...> On Behalf Of Trevor.Conn@...
Sent: Tuesday, May 21, 2019 8:41 AM
To: EdgeX-GoLang@...; edgex-tsc-core@...; EdgeX-Devel@...
Subject: [Edgex-golang] Community Feedback Required on PRs 1348 / 1359

 

Hi all --

 

As you're aware from recent working group calls we need to develop a mechanism whereby the service configurations can be easily overridden to use Redis for the upcoming Edinburgh release. In a deployment scenario, we do not want to require an admin to go in and modify the configuration files for Redis to be enabled. It needs to be accomplished either via the Snap, docker-compose or script for automation.

 

We have two PRs for issue 1347 that seek to solve this problem. One of them is mine and I don't think it's appropriate for me to make a unilateral decision, so I'm seeking feedback from the community as to which approach should be accepted.

 

To summarize each approach:

 

1.) PR 1359 utilizes changes to the config-seed Dockerfile that creates copies of the existing service configuration.toml files. It replaces the necessary properties in the duplicated files with Redis values and then injects these into the config-seed image in a different directory. To use this, a Redis-specific docker-compose.yml would utilize the existing --cmd parameter to point at the root location of the Redis configuration files. For example: 

config-seed:
    image: edgexfoundry/docker-core-config-seed-go:0.7.1
    container_name: edgex-config-seed
    command: ["/edgex/cmd/config-seed/config-seed --profile=docker --cmd=/edgex/cmd-redis"]

 

2.) PR 1348 defines a new command line parameter on the config-seed (--database / -d) that defaults to Mongo. If the service is started with "-d=redis" then the database connectivity parameters are overridden from the existing configuration.toml files prior to population of Consul.
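
By way of comparison with the compose example under option 1, a hypothetical Redis-specific docker-compose.yml under PR 1348 would keep the standard configuration files and simply pass the new switch to config-seed. This is only a sketch; the exact flag spelling is an assumption based on the --database / -d parameter described above:

config-seed:
    image: edgexfoundry/docker-core-config-seed-go:0.7.1
    container_name: edgex-config-seed
    # Assumption: same entrypoint as option 1, with PR 1348's new switch appended
    # so the Mongo connectivity values are overridden with Redis before Consul is populated.
    command: ["/edgex/cmd/config-seed/config-seed --profile=docker --database=redis"]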

 

The essential differences are:

  • Definition of a new command line parameter on the config-seed versus not.
  • Assumptions about whether the config-seed should perform this function in native deployments utilizing Consul.
    • 1348 will accomplish this. 1359 will put this responsibility on the admin since it only touches the Dockerfile.

Since code freeze is on Tuesday the 28th, we need a consensus from the community rather quickly. You are welcome to provide feedback here or on either PR. If no clear preference has emerged by the time we hold the Core Working Group call this Thursday, we will make a decision then.

 

Thanks.

 

Trevor Conn
Technical Staff Engineer

Core Working Group Chair of EdgeX Foundry

Dell Technologies | IoT DellTech
Trevor_Conn@...
Round Rock, TX  USA

Re: [Edgex-golang] Community Feedback Required on PRs 1348 / 1359

Michael Hall
 

Is there any real benefit from making this runtime configurable rather than deploy-time configurable? We could have an edgexfoundry-redis.snap (and docker-compose-redis.yml) that would always use Redis, right from the start.

The user would have to decide which they wanted to use before deploying, but it seems like that would be the case anyway. It could also give us some insights into the demand for one backend or the other, by looking at their separate download statistics.
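
As a rough sketch of that deploy-time idea (the service name, image, and port wiring below are assumptions, not an existing community artifact), a docker-compose-redis.yml could simply ship a Redis container in place of Mongo and have the service configuration point at it:

redis:
    image: redis:5-alpine        # assumption: the stock Redis image; the project may publish its own
    container_name: edgex-redis
    ports:
      - "6379:6379"              # default Redis port that the services' configuration would reference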


Michael Hall
mhall119@...
On 5/21/19 2:47 PM, Ian Johnson wrote:

Apologies for the long email, but there's a bit of background necessary to explain why I don't think it's a good idea to use this command line flag in the snap.

With the snap we are in a better position than Docker to use the configuration files more directly because the configuration files live on the user's host file system and not inside the container where modification is tricky. Due to this, we try to strike a nice balance in the snap between allowing the user to modify the configuration files directly and also enable using consul for more automated device management tasks. Our current plan for managing this can be read about in issue #922.


Basically, in the snap we want to have 2 ways to run config-seed, one automated way which runs without the overwrite flag, and one manual way which runs with the overwrite flag. This allows a developer or manual user to easily change configuration for the services by modifying the files locally on their file system directly and then run a single command to push the content of the configuration from the filesystem into Consul.


I don't think the addition of this new command line option will really help the snap at all. To explain why, let me describe how using Redis currently works. If a user wants to use Redis, our current mechanism would be just to have the user run:


snap set dbtype=redisdb


(or they could use the snapd REST API to do the same thing)


That then triggers a configuration hook to run, which processes the configuration.toml files locally for redis using a tool called remarshal, then pushes that configuration into consul so that an automated configuration management system can interact with consul to get/set configuration afterwards. This has the combined benefit of allowing a local user or developer to know what and where the configuration items that are set come from. They either originated from local configuration files or from something driving consul. The “snap set” mechanism described above merely is used as a transparent external means to modify something internal to EdgeX, the configuration files and Consul. It’s not meant to store state about EdgeX configuration, just to conveniently trigger those internal changes.


If we now consider using the command line flag that Trevor has proposed, if we are to use that option in the snap, we now need to track internally in the snap what database was configured with the “snap set” command above, and more importantly use that config item to determine what flags to launch config-seed with (dbtype=redis -> --database=redis, etc.). This means that in the snap, EdgeX now has an external system controlling configuration of the system and someone using the snap has a third dimension for control-plane/configuration:


  • Configuration files

  • Consul

  • Snapd configuration items


Previously, the snapd configuration items were only used to track what services were on/off, which is no different to someone using docker-compose to start/stop certain services and is a natural way to track this.


Our intention with the snap is to minimize the extent to which the snap distribution differs from the other distribution methods, but also to add convenient features which the snap can provide that aren’t available to docker-compose and native packaging methods. While it may be slightly simpler to add this command line flag for Redis to config-seed and use it within the snap, I think that it is simpler from a user’s perspective (as well as a documentation perspective) to have all the EdgeX configuration live within EdgeX. Having configuration items live in multiple locations significantly contributes to user confusion and complexity as the software ages and so trying to minimize this from the 1.0 release is beneficial.




On Tue, May 21, 2019 at 11:13 AM Goodell, Leonard <leonard.goodell@...> wrote:

I prefer PR 1348 as it works for native and snaps, not just Docker. Also, I think it is more straightforward than modifying the toml file during the Docker build.

 

-Lenny

 

From: EdgeX-GoLang@... <EdgeX-GoLang@...> On Behalf Of Trevor.Conn@...
Sent: Tuesday, May 21, 2019 8:41 AM
To: EdgeX-GoLang@...; edgex-tsc-core@...; EdgeX-Devel@...
Subject: [Edgex-golang] Community Feedback Required on PRs 1348 / 1359

 

Hi all --

 

As you're aware from recent working group calls we need to develop a mechanism whereby the service configurations can be easily overridden to use Redis for the upcoming Edinburgh release. In a deployment scenario, we do not want to require an admin to go in and modify the configuration files for Redis to be enabled. It needs to be accomplished either via the Snap, docker-compose or script for automation.

 

We have two PRs for issue 1347 that seek to solve this problem. One of them is mine and I don't think it's appropriate for me to make a unilateral decision, so I'm seeking feedback from the community as to which approach should be accepted.

 

To summarize each approach:

 

1.) PR 1359 utilizes changes to the config-seed Dockerfile that creates copies of the existing service configuration.toml files. It replaces the necessary properties in the duplicated files with Redis values and then injects these into the config-seed image in a different directory. To use this, a Redis-specific docker-compose.yml would utilize the existing --cmd parameter to point at the root location of the Redis configuration files. For example: 

config-seed:
    image: edgexfoundry/docker-core-config-seed-go:0.7.1
    container_name: edgex-config-seed
    command: ["/edgex/cmd/config-seed/config-seed --profile=docker --cmd=/edgex/cmd-redis"]

 

2.) PR 1348 defines a new command line parameter on the config-seed (--database / -d) that defaults to Mongo. If the service is started with "-d=redis" then the database connectivity parameters are overridden from the existing configuration.toml files prior to population of Consul.

 

The essential differences are:

  • Definition of a new command line parameter on the config-seed versus not.
  • Assumptions about whether the config-seed should perform this function in native deployments utilizing Consul.
    • 1348 will accomplish this. 1359 will put this responsibility on the admin since it only touches the Dockerfile.

Since code freeze is on Tuesday the 28th, we need a consensus from the community rather quickly. You are welcome to provide feedback here or on either PR. If no clear preference has emerged by the time we hold the Core Working Group call this Thursday, we will make a decision then.

 

Thanks.

 

Trevor Conn
Technical Staff Engineer

Core Working Group Chair of EdgeX Foundry

Dell Technologies | IoT DellTech
Trevor_Conn@...
Round Rock, TX  USA