Automatic merge from submit-queue (batch tested with PRs 57282, 57484). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove dead code in cloudprovider
**What this PR does / why we need it**:
Remove dead code in `cloudprovider`
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55751, 57337, 56406, 56864, 57347). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Should reuse code rather than rewrite it
**What this PR does / why we need it**:
We should reuse `dc.GetDatastoreByName()` instead of rewriting it.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56375, 56872, 57053, 57165, 57218). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Compare correct file names for volume detach operation
**What this PR does / why we need it**:
The current volume detach code compares the volume path with the disk path as-is. This PR removes the '.vmdk' extension from both paths before comparing them, ensuring a correct comparison even when one of the paths is missing the '.vmdk' extension.
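A minimal sketch of the idea in Go; `removeDiskExtension` is an illustrative helper name, not necessarily the PR's actual function:
```go
package main

import (
	"fmt"
	"strings"
)

// removeDiskExtension strips a trailing ".vmdk" so that paths with and
// without the extension compare equal.
func removeDiskExtension(path string) string {
	return strings.TrimSuffix(path, ".vmdk")
}

func main() {
	volumePath := "[datastore1] kubevols/myvol"    // no extension
	diskPath := "[datastore1] kubevols/myvol.vmdk" // with extension
	fmt.Println(removeDiskExtension(volumePath) == removeDiskExtension(diskPath)) // true
}
```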
**Which issue(s) this PR fixes**:
Fixes https://github.com/vmware/kubernetes/issues/392
**Special notes for your reviewer**:
Deployed a cluster on vSphere and provisioned a static volume. Verified that a statically provisioned volume gets detached even when the volume path did not contain a .vmdk extension while the disk path did.
**Release note**:
```release-note
vSphere cloud provider: Fix detach operation for volumes when the .vmdk extension is not specified in the volume path.
```
Automatic merge from submit-queue (batch tested with PRs 56997, 57008, 56984, 56975, 56955). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove unused ScrubDNS interface from cloudprovider
**What this PR does / why we need it**:
The DNS scrubber was removed from the kubelet in #36785, and the cloud provider's `ScrubDNS()` interface is no longer used anywhere.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56953.
**Special notes for your reviewer**:
**Release note**:
```release-note
Remove ScrubDNS interface from cloudprovider.
```
Automatic merge from submit-queue (batch tested with PRs 56639, 56746, 56715, 56673, 56726). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix issue #390
**What this PR does / why we need it**:
When a VM node is removed from the vSphere inventory, the corresponding Kubernetes node is unregistered and removed from the registeredNodes cache in nodemanager. However, it is not removed from nodemanager's other node info cache. The fix is to update that cache accordingly, as sketched below.
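A minimal sketch of the fix, with hypothetical cache and field names (the real nodemanager types differ):
```go
package main

import "sync"

// NodeInfo stands in for the vSphere cloud provider's per-node VM record.
type NodeInfo struct{ vmUUID string }

// NodeManager loosely mirrors the two caches described above.
type NodeManager struct {
	mu              sync.Mutex
	registeredNodes map[string]struct{}
	nodeInfoMap     map[string]*NodeInfo
}

// UnregisterNode evicts a node from both caches. The bug was that only
// registeredNodes was cleaned up when the VM left the vSphere inventory.
func (nm *NodeManager) UnregisterNode(name string) {
	nm.mu.Lock()
	defer nm.mu.Unlock()
	delete(nm.registeredNodes, name)
	delete(nm.nodeInfoMap, name) // the fix: keep the node info cache in sync
}

func main() {
	nm := &NodeManager{
		registeredNodes: map[string]struct{}{"node-1": {}},
		nodeInfoMap:     map[string]*NodeInfo{"node-1": {vmUUID: "4237f363"}},
	}
	nm.UnregisterNode("node-1")
}
```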
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/vmware/kubernetes/issues/390
**Special notes for your reviewer**:
Internally review PR here: https://github.com/vmware/kubernetes/pull/402
**Release note**:
```release-note
NONE
```
Testing Done:
1. Removed the node VM from the vSphere inventory.
2. Created a storageclass and PVC to provision a volume dynamically.
- vsphere.conf (cloud-config) is now needed only on the master node
- VCP uses the OS hostname, not the vSphere inventory name
- VCP is now resilient to VM inventory name changes and VM migration
Automatic merge from submit-queue (batch tested with PRs 51409, 54616). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Implement InstanceExistsByProviderID() for cloud providers
Fixes #51406
If cloud providers (like AWS, GCE, etc.) implement `ExternalID()` and support getting an instance by `ProviderID`, they should also implement `InstanceExistsByProviderID()`.
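A minimal sketch of the pattern; `getInstanceByProviderID` and `errInstanceNotFound` are illustrative names. The key point is that "not found" is a valid answer (`false, nil`), not an error:
```go
package main

import (
	"errors"
	"fmt"
)

var errInstanceNotFound = errors.New("instance not found")

type provider struct {
	instances map[string]bool // providerID -> exists (stand-in for a cloud API)
}

// getInstanceByProviderID is a stand-in for the provider's existing lookup.
func (p *provider) getInstanceByProviderID(providerID string) error {
	if !p.instances[providerID] {
		return errInstanceNotFound
	}
	return nil
}

// InstanceExistsByProviderID reports whether the instance still exists.
func (p *provider) InstanceExistsByProviderID(providerID string) (bool, error) {
	if err := p.getInstanceByProviderID(providerID); err != nil {
		if errors.Is(err, errInstanceNotFound) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {
	p := &provider{instances: map[string]bool{"aws:///us-east-1a/i-0abc": true}}
	fmt.Println(p.InstanceExistsByProviderID("aws:///us-east-1a/i-0abc")) // true <nil>
	fmt.Println(p.InstanceExistsByProviderID("aws:///us-east-1a/i-0xyz")) // false <nil>
}
```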
/assign wlan0
/assign @luxas
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51311, 52575, 53169). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Unable to detach the vSphere volume from Powered off node
With the existing implementation, when a vSphere node is powered off, the node is not deleted by the node controller and stays in the "NotReady" state, following an approach similar to GCE as mentioned here: https://github.com/kubernetes/kubernetes/issues/46442.
I observe the following issues:
- The pods on the powered-off node are not **instantaneously** recreated on another available node. Only after a 5-minute timeout are the pods created on other available nodes with the volume attached. This means roughly 5 minutes of application downtime, which is not good at all.
- The volumes on the powered-off node are not detached at all when the pod using the volume has already moved to another available node. Hence any attempt to restart the powered-off node will fail, since the same volume is attached both to the other node and to this powered-off node. (Please note that volumes are not automatically detached from a powered-off node in vSphere, as opposed to GCE and AWS, where volumes are automatically detached when a node is powered off.)
So in order to resolve this problem, we have decided to go back to the approach where the powered-off node is removed by the node controller. The above two problems are then resolved as follows:
- Since the node is deleted, the pods on the powered-off node instantaneously become available on other available nodes with the volume attached to the new nodes. Hence there is no application downtime at all.
- After a period of 6 minutes (the timeout period), the volumes are automatically detached from the powered-off node. Hence any restart of the powered-off node after 6 minutes works and does not cause any problems, as the volumes are already detached.
For now, we want to go ahead with deleting the node from the node controller when a node is powered off in vCenter, until we have a better approach. I think the best possible solution would be to introduce a power handler in the volume controller that checks whether the node is powered off before taking any action for attach/detach operations; see the sketch below.
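A minimal sketch of such a power check using govmomi; where it would hook into the volume controller is hypothetical:
```go
package powercheck

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// isPoweredOff asks vCenter for the VM's current power state so the
// attach/detach path could treat powered-off nodes specially.
func isPoweredOff(ctx context.Context, vm *object.VirtualMachine) (bool, error) {
	state, err := vm.PowerState(ctx)
	if err != nil {
		return false, err
	}
	return state == types.VirtualMachinePowerStatePoweredOff, nil
}
```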
```release-note
None
```
@jingxu97 @saad-ali @divyenpatel @luomiao @rohitjogvmw
Automatic merge from submit-queue
Mark volume as detached when node does not exist for vsphere
If the node does not exist, its volumes will have been detached automatically and become available. So mark them as detached and return false without an error.
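A minimal sketch of the behavior, with hypothetical names (`errNodeNotFound`, the node lookup interface):
```go
package vsphere

import "errors"

// errNodeNotFound is a hypothetical sentinel for a node that has been
// removed from the cluster.
var errNodeNotFound = errors.New("node not found")

type nodeLookup interface {
	GetNodeVM(nodeName string) (interface{}, error)
}

type VSphere struct{ nodes nodeLookup }

// DiskIsAttached reports whether volPath is attached to nodeName. A missing
// node means vSphere has already released its disks, so the volume is
// reported as detached (false) without an error.
func (vs *VSphere) DiskIsAttached(volPath, nodeName string) (bool, error) {
	vm, err := vs.nodes.GetNodeVM(nodeName)
	if errors.Is(err, errNodeNotFound) {
		return false, nil // node is gone: treat the volume as detached
	}
	if err != nil {
		return false, err
	}
	_ = vm // real code would inspect the VM's attached disks here
	return true, nil
}
```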
Fixes #50266
**Special notes for your reviewer**:
/assign @jingxu97
**Release note**:
```release-note
NONE
```
vSphere has a limitation of 80 characters for vmName.
With the vsphere-k8s prefix and `vmdisk.volumeOptions.Name`, vmName can easily exceed 80 characters.
Used a hash function on just the `vmdisk.volumeOptions.Name` part, since the dummy-VM cleanup logic depends on the `vsphere-k8s` prefix.
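A minimal sketch of the naming scheme, assuming the `vsphere-k8s` prefix must be preserved for cleanup and only the volume-name part is hashed:
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const dummyVMPrefix = "vsphere-k8s"

// dummyVMName keeps the cleanup-relevant prefix intact and hashes only the
// (potentially very long) volume name, so the result stays well under the
// 80-character vmName limit.
func dummyVMName(volumeName string) string {
	sum := sha256.Sum256([]byte(volumeName))
	return fmt.Sprintf("%s-%x", dummyVMPrefix, sum[:8])
}

func main() {
	name := dummyVMName("kubernetes-dynamic-pvc-0f6a7e2c-with-a-very-long-datastore-suffix")
	fmt.Println(name, len(name)) // prints something like "vsphere-k8s-<16 hex chars> 28"
}
```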
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR provides each cloud provider the ability to create Kubernetes clients. Either the full-access or the service-account client builder is passed from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through the defined interfaces, and this seems preferable to adding parameters.
Please leave your thoughts/comments.
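A minimal sketch of the idea; the interface shape here is illustrative, not the exact controller-manager types:
```go
package cloudinit

import "k8s.io/client-go/kubernetes"

// ClientBuilder is a stand-in for the controller manager's client builder.
type ClientBuilder interface {
	ClientOrDie(name string) kubernetes.Interface
}

type myCloud struct {
	client kubernetes.Interface
}

// Initialize hands the cloud provider a way to build Kubernetes clients, so
// it can read cluster state not exposed through the defined interfaces.
func (c *myCloud) Initialize(cb ClientBuilder) {
	c.client = cb.ClientOrDie("my-cloud-provider")
}
```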
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Same internal and external ip for vSphere Cloud Provider
Currently, the vSphere Cloud Provider reports container IP addresses as the internal IP. This PR modifies the vSphere Cloud Provider to report the same IP address, as provided by the VMware infrastructure, for both the internal and the external address.
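A minimal sketch of reporting one VM-provided address as both internal and external, using the current k8s.io/api types for illustration:
```go
package addresses

import v1 "k8s.io/api/core/v1"

// nodeAddresses reports the single IP provided by the VMware infrastructure
// as both the internal and the external address of the node.
func nodeAddresses(ip string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
	}
}
```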
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Add approvers to vsphere cloudprovider
This PR adds approvers for the vSphere Cloud Provider.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Filter out IPV6 addresses from NodeAddresses() returned by vSphere
The vSphere CP returns both IPv6 and IPv4 addresses for a node as part of its NodeAddresses() implementation. However, the kubelet fails due to a duplicate api.NodeAddress value when the node has an IPv6 address associated with it. This issue is tracked in #42690. The following was observed:
- When we enabled the logs and checked the addresses sent by the vSphere CP to the kubelet, we did not see any duplicate addresses at all.
- Also, kubelet_node_status does not receive any duplicate address from the cloud provider.
However, when we filter out the IPv6 addresses and only return IPv4 addresses to the kubelet, it works perfectly fine.
So even though the kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue with how the kubelet propagates these addresses to the API server, or the API server may be unable to handle IPv6 addresses.
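A minimal sketch of the filtering, using the standard library to keep only IPv4 addresses:
```go
package main

import (
	"fmt"
	"net"
)

// filterIPv4 drops anything that does not parse as an IPv4 address.
func filterIPv4(addrs []string) []string {
	var out []string
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	addrs := []string{"10.160.25.12", "fe80::250:56ff:fe89:d2a7"}
	fmt.Println(filterIPv4(addrs)) // [10.160.25.12]
}
```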
@divyenpatel @abrarshivani @pdhamdhere @tusharnt
**Release note**:
```release-note
None
```