Error when adding a Lite node in centralized deployment mode #1891
Yaojh20302
started this conversation in General
Replies: 2 comments 1 reply
- I've run into a similar situation: docker showed the container as up + restarting and the script kept waiting. Are the ports conflicting? Does install.sh hard-code any port values?
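A quick way to test the port-conflict hypothesis from this reply: a small sketch (assuming bash's `/dev/tcp` feature is available) that probes the ports from the install.sh invocation in the original post. The port list mirrors the `-p/-k/-g/-s/-q` values there; substitute your own.

```shell
#!/usr/bin/env bash
# Probe each port the Lite node wants to bind. If another process already
# listens on one of them, the installed container can land in a restart loop.

check_port() {
  local port="$1"
  # bash's /dev/tcp pseudo-device: the redirect succeeds only when a
  # process is already listening on 127.0.0.1:<port>.
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: IN USE"
  else
    echo "port ${port}: free"
  fi
}

# Ports from the install command (-p 10080 -k 41082 -g 41083 -s 8180 -q 13181)
for p in 10080 41082 41083 8180 13181; do
  check_port "$p"
done
```

If a port shows as in use, `ss -tlnp` (where available) will name the owning process.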
- It was installed by extracting the package downloaded from the official site; install.sh was not modified in any way.
- Two hosts (202.117.45.44 and 202.117.45.40) are used to deploy SecretPad-All-In-One in centralized deployment mode. 45.44 was deployed as the Master node; that deployment succeeded and works normally. Now I want to deploy a standalone Lite node on 45.40. Following the official documentation, I ran:
./install.sh lite -n iprnvgnt -m 'http://202.117.45.44:18080' -t nmgA2YmK7lWqRsgxgPRfPDN4sHfdtHqH -p 10080 -k 41082 -g 41083 -s 8180 -q 13181 -P notls
The error output is as follows:
network=kuscia-exchange
ca51c4c1324599a74e08f9c5ce86609db3e0a8862a5e37f9b4fe719ddbf236a4
Error response from daemon: Container ca51c4c1324599a74e08f9c5ce86609db3e0a8862a5e37f9b4fe719ddbf236a4 is restarting, wait until the container is running
(the line above repeats 25 times while the script waits for the container)
[Error] Probe datamesh in container 'root-kuscia-lite-iprnvgnt' failed.
You could run 'docker logs root-kuscia-lite-iprnvgnt' to check the log
The error messages inside the Docker container are as follows:
2025-06-04 19:02:04.244 INFO start/manager.go:357 [Module] coredns notified to exit...
ts=2025-06-04T11:02:04.244Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:175 Context done, begin to stop process [containerd][32]
ts=2025-06-04T11:02:04.244Z caller=node_exporter.go:195 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
2025-06-04 19:02:04.244 INFO start/manager.go:342 [Module] coredns is successful finished
ts=2025-06-04T11:02:04.244Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|sys|var|run|boot|/lib/docker/.+|var/lib/kubelet/.+)($|/)
ts=2025-06-04T11:02:04.244Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs|tmpfs)$
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:155 Process [containerd][32] exit with error: signal: interrupt
ts=2025-06-04T11:02:04.244Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:110 [containerd] run process failed, detail -> process [containerd][32] only existed 1 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 INFO start/manager.go:339 [Module] containerd is finished with err=startup process failed at first time, so stop at once, error: process [containerd][32] only existed 1 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 INFO start/manager.go:371 Current step modules are finished now
2025-06-04 19:02:04.244 INFO start/manager.go:357 [Module] envoy notified to exit...
2025-06-04 19:02:04.244 INFO start/manager.go:357 [Module] domainroute notified to exit...
2025-06-04 19:02:04.244 INFO start/manager.go:357 [Module] nodeexporter notified to exit...
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:175 Context done, begin to stop process [envoy][23]
ts=2025-06-04T11:02:04.244Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2025-06-04 19:02:04.244 WARN modules/envoy.go:138 Context done, exit logRotate
ts=2025-06-04T11:02:04.244Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
ts=2025-06-04T11:02:04.244Z caller=node_exporter.go:117 level=info collector=cpu
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:175 Context done, begin to stop process [node_exporter][25]
ts=2025-06-04T11:02:04.244Z caller=node_exporter.go:117 level=info collector=diskstats
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:155 Process [node_exporter][25] exit with error: signal: interrupt
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:155 Process [envoy][23] exit with error: signal: interrupt
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:110 [node_exporter] run process failed, detail -> process [node_exporter][25] only existed 3 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:110 [envoy] run process failed, detail -> process [envoy][23] only existed 3 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 INFO start/manager.go:339 [Module] nodeexporter is finished with err=startup process failed at first time, so stop at once, error: process [node_exporter][25] only existed 3 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 INFO start/manager.go:339 [Module] envoy is finished with err=startup process failed at first time, so stop at once, error: process [envoy][23] only existed 3 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 INFO start/manager.go:371 Current step modules are finished now
2025-06-04 19:02:04.244 INFO start/start.go:125 Kuscia Instance [iprnvgnt] shut down
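The log above interleaves node_exporter startup output with kuscia's own shutdown sequence, which makes the trigger easy to miss. One way to read it is to extract just the supervisor "only existed N ms" summaries, which name the modules that died; a sketch below does this with two sample lines copied from the log above (in practice, pipe `docker logs root-kuscia-lite-iprnvgnt 2>&1` into the grep instead of the heredoc).

```shell
#!/usr/bin/env bash
# Extract supervisor failure summaries from kuscia log output.
# The heredoc holds two sample lines copied from the log in this thread.

failures="$(grep -o 'process \[[a-z_]*\]\[[0-9]*\] only existed [0-9]* ms' <<'EOF'
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:110 [containerd] run process failed, detail -> process [containerd][32] only existed 1 ms, less than 3000 ms, with error: signal: interrupt
2025-06-04 19:02:04.244 WARN supervisor/supervisor.go:110 [node_exporter] run process failed, detail -> process [node_exporter][25] only existed 3 ms, less than 3000 ms, with error: signal: interrupt
EOF
)"
echo "$failures"
```

Note that every module here exits with "signal: interrupt", i.e. they were stopped during a shutdown already in progress, so the original trigger is likely earlier in the full container log than the excerpt shown above.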