Hey folks,
I am facing an issue with a BESS container running inside a Kubernetes pod.
Environment:
Underlying OS: Ubuntu 20.04 LTS
Running on a Kubernetes cluster
Logs:
I0301 17:08:18.193297 871 bessctl.cc:487] *** All workers have been paused ***
I0301 17:08:18.731501 890 worker.cc:319] Worker 0(0x7f52a7ffd400) is running on core 0 (socket 0)
I0301 17:08:18.831060 889 pmd.cc:200] port id: 0matches vdev: net_af_packet0,iface=access,qpairs=1
W0301 17:08:18.831115 889 pmd.cc:389] Invalid socket, falling back...
I0301 17:08:18.831128 889 pmd.cc:392] Initializing Port:0 with memory from socket 0
I0301 17:08:18.831151 889 dpdk.cc:72] eth_dev_macaddr_set(): ioctl(SIOCSIFHWADDR) failed:Operation not permitted
I0301 17:08:19.003031 927 pmd.cc:200] port id: 1matches vdev: net_af_packet1,iface=core,qpairs=1
W0301 17:08:19.003074 927 pmd.cc:389] Invalid socket, falling back...
I0301 17:08:19.003087 927 pmd.cc:392] Initializing Port:1 with memory from socket 0
I0301 17:08:19.003109 927 dpdk.cc:72] eth_dev_macaddr_set(): ioctl(SIOCSIFHWADDR) failed:Operation not permitted
I0301 17:08:19.204418 1122 bessctl.cc:691] Checking scheduling constraints
E0301 17:08:19.204540 1122 module.cc:224] Mismatch in number of workers for module accessMerge min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204568 1122 module.cc:224] Mismatch in number of workers for module accessQ0FastPO min required 1 max allowed 1 attached workers 0
E0301 17:08:19.204591 1122 module.cc:224] Mismatch in number of workers for module accessQSplit min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204610 1122 module.cc:224] Mismatch in number of workers for module accessSrcEther min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204627 1122 module.cc:224] Mismatch in number of workers for module access_measure min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204639 1122 module.cc:224] Mismatch in number of workers for module coreMerge min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204658 1122 module.cc:224] Mismatch in number of workers for module coreQ0FastPO min required 1 max allowed 1 attached workers 0
E0301 17:08:19.204674 1122 module.cc:224] Mismatch in number of workers for module coreQSplit min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204692 1122 module.cc:224] Mismatch in number of workers for module coreSrcEther min required 1 max allowed 64 attached workers 0
E0301 17:08:19.204700 1122 module.cc:224] Mismatch in number of workers for module core_measure min required 1 max allowed 64 attached workers 0
W0301 17:08:19.206485 1123 metadata.cc:77] Metadata attr timestamp/8 of module access_measure has no upstream module that sets the value!
W0301 17:08:19.206530 1123 metadata.cc:77] Metadata attr timestamp/8 of module core_measure has no upstream module that sets the value!
I0301 17:08:19.206769 1123 bessctl.cc:516] *** Resuming ***
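The eth_dev_macaddr_set() / ioctl(SIOCSIFHWADDR) "Operation not permitted" lines above usually indicate that the container is missing CAP_NET_ADMIN, which DPDK's af_packet PMD needs when it tries to set the interface MAC address. For reference, a minimal sketch of granting that capability to the pod; the pod name, container name, and image are placeholders, not taken from this report:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bess                    # hypothetical pod name
spec:
  containers:
  - name: bess                  # hypothetical container name
    image: <your-bess-image>    # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]      # allows ioctl(SIOCSIFHWADDR) from inside the pod
EOF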
ip a (inside the pod):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 9e:1c:34:fc:b7:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.84.28/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::9c1c:34ff:fefc:b7c1/64 scope link
valid_lft forever preferred_lft forever
4: access@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:0d:4d:b3:64:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.252.3/24 brd 192.168.252.255 scope global access
valid_lft forever preferred_lft forever
inet6 fe80::ec0d:4dff:feb3:64a8/64 scope link
valid_lft forever preferred_lft forever
5: core@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 36:6b:f4:4e:0f:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.250.3/24 brd 192.168.250.255 scope global core
valid_lft forever preferred_lft forever
inet6 fe80::346b:f4ff:fe4e:ffb/64 scope link
valid_lft forever preferred_lft forever
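If it helps to rule the missing capability in or out, the effective capability set of the container's main process can be checked from inside the pod (capsh ships with libcap and may need to be installed in the image; the hex value below is a placeholder):

grep CapEff /proc/1/status          # bitmask of effective capabilities for PID 1
capsh --decode=0000003fffffffff     # replace with the CapEff value printed above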
The weird part is that sometimes it runs without any issues.
I can provide more info if required.
Thanks