
Conversation

@akagami-harsh
Contributor

Description of your changes:

@pschoen-itsc
Contributor

/lgtm

@google-oss-prow bot removed the lgtm label Oct 23, 2025
# create bucket if not exists (ignore error if exists)
echo "s3.bucket.create --name mlpipeline" | /usr/bin/weed shell || true
# configure admin user using keys from secret
echo "s3.configure -user kubeflow-admin -access_key $accesskey -secret_key $secretkey -actions Admin -apply" | /usr/bin/weed shell
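For readers of this hunk, here is a minimal sketch of how the two variables might be populated before the weed shell calls, assuming the credentials are mounted from a Kubernetes Secret (the mount path and key names below are illustrative assumptions, not necessarily the PR's exact wiring):

# hypothetical: read the admin keys from a mounted Secret rather than
# hard-coding them; /etc/s3-credentials is an assumed mount path
accesskey="$(cat /etc/s3-credentials/accesskey)"
secretkey="$(cat /etc/s3-credentials/secretkey)"
# "|| true" keeps the script idempotent: creating a bucket that already
# exists fails, but the failure is ignored so restarts still succeed
echo "s3.bucket.create --name mlpipeline" | /usr/bin/weed shell || true
# the keys are interpolated only into the string piped to weed shell;
# nothing writes them to stdout or stderr, so they stay out of pod logs
echo "s3.configure -user kubeflow-admin -access_key $accesskey -secret_key $secretkey -actions Admin -apply" | /usr/bin/weed shell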
@juliusvonkohout
Member
Oct 23, 2025

Does this print credentials in the logs? @akagami-harsh @pschoen-itsc

@akagami-harsh
Contributor Author

No, it doesn't

harshvir@msi:~/pipelines$ k logs -n kubeflow seaweedfs-6ff498588b-mf2tt 
I1023 12:32:30.289880 master.go:283 current: 10.244.0.66:9333 peers:
I1023 12:32:30.290030 file_util.go:27 Folder /data Permission: -rwxrwxrwx
I1023 12:32:30.290179 master.go:283 current: 10.244.0.66:9333 peers:10.244.0.66:9333
I1023 12:32:30.290196 file_util.go:27 Folder /data Permission: -rwxrwxrwx
I1023 12:32:30.290483 master_server.go:132 Volume Size Limit is 1024 MB
I1023 12:32:30.290721 master.go:164 Start Seaweed Master 30GB 3.92 7324cb717 at 10.244.0.66:9333
I1023 12:32:30.291084 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:30.291372 raft_server.go:119 Starting RaftServer with 10.244.0.66:9333
I1023 12:32:30.294863 raft_server.go:168 current cluster leader: 
W1023 12:32:31.291984 filer_server.go:159 skipping default store dir in /data/filerldb2
I1023 12:32:31.292147 filer_server.go:165 max_file_name_length 255
I1023 12:32:32.081664 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:32.291480 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:32.291491 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:33.292097 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:33.292108 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:33.873107 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:34.292674 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:34.292688 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:35.294023 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:35.294038 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:35.663792 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:36.294909 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:36.294923 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:37.295453 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:37.295464 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:37.455045 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:38.296332 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:38.296344 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:39.246551 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:39.297120 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:39.297133 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:40.297982 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:40.297995 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:41.037164 volume_grpc_client_to_master.go:43 checkWithMaster 10.244.0.66:9333: get master 10.244.0.66:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:19333: connect: connection refused"
I1023 12:32:41.298794 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:41.298793 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:42.299258 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:42.299267 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:42.631784 master_server.go:209 [10.244.0.66:9333]  - is the leader.
I1023 12:32:42.632120 master.go:215 Start Seaweed Master 30GB 3.92 7324cb717 grpc server at 10.244.0.66:19333
I1023 12:32:43.300629 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:43.300657 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:44.133106 masterclient.go:170 No existing leader found!
I1023 12:32:44.133126 raft_server.go:190 Initializing new cluster
I1023 12:32:44.133161 master_server.go:181 leader change event:  => 10.244.0.66:9333
I1023 12:32:44.133184 master_server.go:184 [10.244.0.66:9333] 10.244.0.66:9333 becomes leader.
I1023 12:32:44.301681 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:44.301695 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:44.453717 master_grpc_server.go:372 + client [email protected]:9333
I1023 12:32:44.496553 volume_loading.go:98 readSuperBlock volume 14 version 3
I1023 12:32:44.496583 volume_loading.go:139 checking volume data integrity for volume 14
I1023 12:32:44.496613 volume_loading.go:157 loading memory index /data/mlpipeline_14.idx to memory
I1023 12:32:44.496645 volume_loading.go:98 readSuperBlock volume 12 version 3
I1023 12:32:44.496671 volume_loading.go:139 checking volume data integrity for volume 12
I1023 12:32:44.496719 disk_location.go:190 data file /data/mlpipeline_14.dat, replication=000 v=3 size=106328 ttl=
I1023 12:32:44.496719 volume_loading.go:98 readSuperBlock volume 7 version 3
I1023 12:32:44.496736 volume_loading.go:98 readSuperBlock volume 2 version 3
I1023 12:32:44.496742 volume_loading.go:98 readSuperBlock volume 6 version 3
I1023 12:32:44.496755 volume_loading.go:139 checking volume data integrity for volume 7
I1023 12:32:44.496760 volume_loading.go:139 checking volume data integrity for volume 2
I1023 12:32:44.496767 volume_loading.go:139 checking volume data integrity for volume 6
I1023 12:32:44.496775 volume_loading.go:157 loading memory index /data/7.idx to memory
I1023 12:32:44.496836 volume_loading.go:98 readSuperBlock volume 1 version 3
I1023 12:32:44.496841 disk_location.go:190 data file /data/7.dat, replication=000 v=3 size=77968 ttl=
I1023 12:32:44.496852 volume_loading.go:98 readSuperBlock volume 13 version 3
I1023 12:32:44.496866 volume_loading.go:139 checking volume data integrity for volume 1
I1023 12:32:44.496873 volume_loading.go:139 checking volume data integrity for volume 13
I1023 12:32:44.496887 volume_loading.go:157 loading memory index /data/1.idx to memory
I1023 12:32:44.496890 volume_loading.go:157 loading memory index /data/mlpipeline_13.idx to memory
I1023 12:32:44.496889 volume_loading.go:98 readSuperBlock volume 3 version 3
I1023 12:32:44.496908 volume_loading.go:139 checking volume data integrity for volume 3
I1023 12:32:44.496921 disk_location.go:190 data file /data/mlpipeline_13.dat, replication=000 v=3 size=71000 ttl=
I1023 12:32:44.496958 disk_location.go:190 data file /data/1.dat, replication=000 v=3 size=41560 ttl=
I1023 12:32:44.497037 volume_loading.go:98 readSuperBlock volume 5 version 3
I1023 12:32:44.497058 volume_loading.go:139 checking volume data integrity for volume 5
I1023 12:32:44.497076 volume_loading.go:98 readSuperBlock volume 10 version 3
I1023 12:32:44.497099 volume_loading.go:139 checking volume data integrity for volume 10
I1023 12:32:44.497128 volume_loading.go:98 readSuperBlock volume 4 version 3
I1023 12:32:44.497154 volume_loading.go:139 checking volume data integrity for volume 4
I1023 12:32:44.497173 volume_loading.go:98 readSuperBlock volume 11 version 3
I1023 12:32:44.497196 volume_loading.go:139 checking volume data integrity for volume 11
I1023 12:32:44.497213 volume_loading.go:157 loading memory index /data/mlpipeline_11.idx to memory
I1023 12:32:44.497247 disk_location.go:190 data file /data/mlpipeline_11.dat, replication=000 v=3 size=69288 ttl=
I1023 12:32:44.497329 volume_loading.go:98 readSuperBlock volume 8 version 3
I1023 12:32:44.497348 volume_loading.go:139 checking volume data integrity for volume 8
I1023 12:32:44.497365 volume_loading.go:157 loading memory index /data/mlpipeline_8.idx to memory
I1023 12:32:44.497399 disk_location.go:190 data file /data/mlpipeline_8.dat, replication=000 v=3 size=92112 ttl=
I1023 12:32:44.497583 volume_loading.go:157 loading memory index /data/2.idx to memory
I1023 12:32:44.497653 disk_location.go:190 data file /data/2.dat, replication=000 v=3 size=87464 ttl=
I1023 12:32:44.497670 volume_loading.go:157 loading memory index /data/mlpipeline_12.idx to memory
I1023 12:32:44.497698 volume_loading.go:98 readSuperBlock volume 9 version 3
I1023 12:32:44.497720 volume_loading.go:139 checking volume data integrity for volume 9
I1023 12:32:44.497734 volume_loading.go:157 loading memory index /data/3.idx to memory
I1023 12:32:44.497739 volume_loading.go:157 loading memory index /data/mlpipeline_9.idx to memory
I1023 12:32:44.497740 volume_loading.go:157 loading memory index /data/6.idx to memory
I1023 12:32:44.497743 volume_loading.go:157 loading memory index /data/4.idx to memory
I1023 12:32:44.497740 disk_location.go:190 data file /data/mlpipeline_12.dat, replication=000 v=3 size=67584 ttl=
I1023 12:32:44.497774 volume_loading.go:157 loading memory index /data/mlpipeline_10.idx to memory
I1023 12:32:44.497776 disk_location.go:190 data file /data/6.dat, replication=000 v=3 size=32032 ttl=
I1023 12:32:44.497786 disk_location.go:190 data file /data/3.dat, replication=000 v=3 size=48560 ttl=
I1023 12:32:44.497791 disk_location.go:190 data file /data/4.dat, replication=000 v=3 size=27912 ttl=
I1023 12:32:44.497796 disk_location.go:190 data file /data/mlpipeline_9.dat, replication=000 v=3 size=80768 ttl=
I1023 12:32:44.497799 disk_location.go:190 data file /data/mlpipeline_10.dat, replication=000 v=3 size=91680 ttl=
I1023 12:32:44.497943 volume_loading.go:157 loading memory index /data/5.idx to memory
I1023 12:32:44.497965 disk_location.go:190 data file /data/5.dat, replication=000 v=3 size=85520 ttl=
I1023 12:32:44.497976 disk_location.go:246 Store started on dir: /data with 14 volumes max 0
I1023 12:32:44.498112 disk_location.go:249 Store started on dir: /data with 0 ec shards
I1023 12:32:44.498172 volume_grpc_client_to_master.go:52 Volume server start with seed master nodes: [10.244.0.66:9333]
I1023 12:32:44.498250 volume.go:382 Start Seaweed volume server 30GB 3.92 7324cb717 at 10.244.0.66:8080
I1023 12:32:44.498622 volume_grpc_client_to_master.go:109 Heartbeat to: 10.244.0.66:9333
I1023 12:32:44.498867 node.go:250 topo adds child DefaultDataCenter
I1023 12:32:44.498878 node.go:250 topo:DefaultDataCenter adds child DefaultRack
I1023 12:32:44.498887 node.go:250 topo:DefaultDataCenter:DefaultRack adds child 10.244.0.66:8080
I1023 12:32:44.498895 node.go:250 topo:DefaultDataCenter:DefaultRack:10.244.0.66:8080 adds child 
I1023 12:32:44.498900 master_grpc_server.go:141 added volume server 0: 10.244.0.66:8080 [c812b0d4-1b96-4da4-8846-14c33dfdb284]
I1023 12:32:44.498918 master_grpc_server.go:51 found new uuid:10.244.0.66:8080 [c812b0d4-1b96-4da4-8846-14c33dfdb284] , map[10.244.0.66:8080:[c812b0d4-1b96-4da4-8846-14c33dfdb284]]
I1023 12:32:44.499044 volume_layout.go:417 Volume 14 becomes writable
I1023 12:32:44.499051 volume_layout.go:417 Volume 11 becomes writable
I1023 12:32:44.499054 volume_layout.go:417 Volume 8 becomes writable
I1023 12:32:44.499056 volume_layout.go:417 Volume 9 becomes writable
I1023 12:32:44.499059 volume_layout.go:417 Volume 10 becomes writable
I1023 12:32:44.499062 volume_layout.go:417 Volume 5 becomes writable
I1023 12:32:44.499064 volume_layout.go:417 Volume 7 becomes writable
I1023 12:32:44.499067 volume_layout.go:417 Volume 13 becomes writable
I1023 12:32:44.499069 volume_layout.go:417 Volume 1 becomes writable
I1023 12:32:44.499071 volume_layout.go:417 Volume 2 becomes writable
I1023 12:32:44.499074 volume_layout.go:417 Volume 12 becomes writable
I1023 12:32:44.499076 volume_layout.go:417 Volume 6 becomes writable
I1023 12:32:44.499078 volume_layout.go:417 Volume 3 becomes writable
I1023 12:32:44.499081 volume_layout.go:417 Volume 4 becomes writable
I1023 12:32:44.499086 master_grpc_server.go:204 master see new volume 14 from 10.244.0.66:8080
I1023 12:32:44.499091 master_grpc_server.go:204 master see new volume 11 from 10.244.0.66:8080
I1023 12:32:44.499093 master_grpc_server.go:204 master see new volume 8 from 10.244.0.66:8080
I1023 12:32:44.499095 master_grpc_server.go:204 master see new volume 9 from 10.244.0.66:8080
I1023 12:32:44.499096 master_grpc_server.go:204 master see new volume 10 from 10.244.0.66:8080
I1023 12:32:44.499103 master_grpc_server.go:204 master see new volume 5 from 10.244.0.66:8080
I1023 12:32:44.499105 master_grpc_server.go:204 master see new volume 7 from 10.244.0.66:8080
I1023 12:32:44.499107 master_grpc_server.go:204 master see new volume 13 from 10.244.0.66:8080
I1023 12:32:44.499109 master_grpc_server.go:204 master see new volume 1 from 10.244.0.66:8080
I1023 12:32:44.499110 master_grpc_server.go:204 master see new volume 2 from 10.244.0.66:8080
I1023 12:32:44.499112 master_grpc_server.go:204 master see new volume 12 from 10.244.0.66:8080
I1023 12:32:44.499114 master_grpc_server.go:204 master see new volume 6 from 10.244.0.66:8080
I1023 12:32:44.499115 master_grpc_server.go:204 master see new volume 3 from 10.244.0.66:8080
I1023 12:32:44.499117 master_grpc_server.go:204 master see new volume 4 from 10.244.0.66:8080
I1023 12:32:45.299071 leveldb2_store.go:43 filer store leveldb2 dir: /data/filerldb2
I1023 12:32:45.299096 file_util.go:27 Folder /data/filerldb2 Permission: -rwxr-xr-x
I1023 12:32:45.299705 master_grpc_server.go:372 + client [email protected]:8888
I1023 12:32:45.299888 masterclient.go:267 + [email protected]:9333 noticed .filer 10.244.0.66:8888
I1023 12:32:45.302505 s3.go:220 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:45.302522 iam.go:64 wait to connect to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:45.435180 filer.go:159 existing filer.store.id = -1451374703
I1023 12:32:45.435192 configuration.go:28 configured filer store to leveldb2
I1023 12:32:45.435611 master_client.go:20 the cluster has 1 filer
I1023 12:32:45.435625 filer.go:113 10.244.0.66:8888 aggregate from peers [10.244.0.66:8888]
I1023 12:32:45.477930 meta_aggregator.go:76 loopSubscribeToOneFiler read 10.244.0.66:8888 start from 2025-10-23 12:31:45.435617368 +0000 UTC 1761222705435617368
I1023 12:32:45.478383 meta_aggregator.go:92 subscribing remote 10.244.0.66:8888 meta change: connecting to peer filer 10.244.0.66:8888: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 10.244.0.66:18888: connect: connection refused"
I1023 12:32:45.478491 filer.go:352 Start Seaweed Filer 30GB 3.92 7324cb717 at 10.244.0.66:8888
I1023 12:32:46.304038 s3.go:216 S3 read filer buckets dir: /buckets
I1023 12:32:46.304049 s3.go:223 connected to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:46.304038 iam.go:60 IAM read filer configuration: masters:"10.244.0.66:9333"  max_mb:4  dir_buckets:"/buckets"  signature:-1451374703  metrics_interval_sec:15  version:"30GB 3.92 7324cb717"  major_version:3  minor_version:92
I1023 12:32:46.304066 iam.go:67 connected to filer 10.244.0.66:8888 grpc address 10.244.0.66:18888
I1023 12:32:46.304554 iam.go:80 NewIamApiServer created
I1023 12:32:46.304716 iam.go:93 Start Seaweed IAM API Server 30GB 3.92 7324cb717 at http port 8111
I1023 12:32:46.304827 s3api_circuit_breaker.go:35 s3 circuit breaker not configured
I1023 12:32:46.305973 s3.go:373 Start Seaweed S3 API Server 30GB 3.92 7324cb717 at http port 8333
I1023 12:32:46.306134 filer_grpc_server_sub_meta.go:332 +  listener [email protected]:58480 clientId -317266181 clientEpoch 2
I1023 12:32:46.306141 filer_grpc_server_sub_meta.go:45  [email protected]:58480 starts to subscribe /etc/ from 2025-10-23 12:32:46.304064601 +0000 UTC
I1023 12:32:46.342584 master_grpc_server.go:372 + client .adminShell@366bf7fb-9bd7-490c-bc4d-1534245d5d14
I1023 12:32:46.343321 master_grpc_server.go:387 - client .adminShell@366bf7fb-9bd7-490c-bc4d-1534245d5d14
I1023 12:32:46.343744 master_grpc_server.go:372 + client .adminShell@189b5042-fe14-4c29-a344-4fe56c7eb036
I1023 12:32:46.427432 master_grpc_server.go:387 - client .adminShell@189b5042-fe14-4c29-a344-4fe56c7eb036
I1023 12:32:46.451210 master_grpc_server.go:372 + client .adminShell@d04231ae-5e37-482e-83e8-062b187beac1
I1023 12:32:46.451805 master_grpc_server.go:387 - client .adminShell@d04231ae-5e37-482e-83e8-062b187beac1
I1023 12:32:46.452201 master_grpc_server.go:372 + client .adminShell@0bdf5363-dd92-417c-a36f-342ce27dbc26
I1023 12:32:46.514188 master_grpc_server.go:387 - client .adminShell@0bdf5363-dd92-417c-a36f-342ce27dbc26
I1023 12:32:47.212000 meta_aggregator.go:76 loopSubscribeToOneFiler read 10.244.0.66:8888 start from 2025-10-23 12:31:45.435617368 +0000 UTC 1761222705435617368
I1023 12:32:47.212386 meta_aggregator.go:178 subscribing remote 10.244.0.66:8888 meta change: 2025-10-23 12:31:45.435617368 +0000 UTC, clientId:165656381
I1023 12:32:47.212866 filer_grpc_server_sub_meta.go:332 + local listener filer:10.244.0.66:[email protected]:58490 clientId -165656381 clientEpoch 1
I1023 12:32:47.212874 filer_grpc_server_sub_meta.go:143  + filer:10.244.0.66:[email protected]:58490 local subscribe / from 2025-10-23 12:31:45.435617368 +0000 UTC clientId:-165656381
I1023 12:32:47.212881 filer_grpc_server_sub_meta.go:156 read on disk filer:10.244.0.66:[email protected]:58490 local subscribe / from 2025-10-23 12:31:45.435617368 +0000 UTC
I1023 12:32:47.212976 filer_grpc_server_sub_meta.go:175 read in memory filer:10.244.0.66:[email protected]:58490 local subscribe / from 2025-10-23 12:31:45.435617368 +0000 UTC
I1023 12:32:47.213177 auth_credentials_subscribe.go:84 updated bucketMetadata /buckets/name:"mlpipeline"  is_directory:true  attributes:{mtime:1761222766  file_mode:2147484159  crtime:1760966148}
I1023 12:32:47.213235 auth_credentials_subscribe.go:63 updated /etc/iam/identity.json
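For completeness, a quick spot-check one could run to confirm that neither key value appears anywhere in the output above (a hypothetical command, reusing the pod name from this session and assuming $accesskey and $secretkey still hold the real values):

# search the pod logs for the literal key values; no match means no leak
kubectl logs -n kubeflow seaweedfs-6ff498588b-mf2tt | grep -F -e "$accesskey" -e "$secretkey" \
  || echo "credentials do not appear in the logs"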

@juliusvonkohout
Member

@droctothorpe for approval if https://github.com/kubeflow/pipelines/pull/12387/files#r2454944355 is clarified; otherwise:
/lgtm

@droctothorpe
Collaborator

droctothorpe commented Oct 23, 2025

/lgtm

Will merge pending the answer to @juliusvonkohout's question.

@akagami-harsh
Contributor Author

/retest

@google-oss-prow

New changes are detected. LGTM label has been removed.

@google-oss-prow

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from juliusvonkohout. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@juliusvonkohout
Member

@droctothorpe can you help with the tests?
