RFC: install: charts for peerpods #2656
Conversation
Hi @beraldoleal, first of all, thanks for working on that! I only glanced at the changes, and the first thing that catches my eye is that you are grouping configuration for all cloud providers in the same files. However, most of our code is organized by cloud provider directories (e.g. src/cloud-api-adaptor/, src/cloud-providers/, etc...). It's common for pull requests to be reviewed by the maintainers of provider X because the code changes files in provider X's directory; likewise, CI runs only the tests for X... is it possible to make the helm charts modular per cloud provider too? Also, does this PR contain everything needed to properly replace the operator? Asking because I was thinking maybe we could run CI on this PR to see if it works fine.
I thought about splitting as well, but I'm not entirely convinced it's the right approach here. A few things have been bugging me:
For testing multiple providers in CI, we'd need multiple values files anyway, regardless of which approach we take. That said, I guess my main question is: do we actually have templates that diverge from provider to provider, or is it just the variables that go into the ConfigMap? If it's only the values that differ and the templates work generically across all providers, I'm leaning toward keeping things together for now. But I'm open to being convinced otherwise. wdyt?
I haven't deployed yet. It's on my todo list for this week. But it's a good idea to run a CI job.
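To make the multiple-values-files idea concrete, here is a rough sketch of what a per-provider values file for a single shared chart could look like; the file name and keys below are illustrative assumptions, not taken from this PR:

```yaml
# values-gcp.yaml (hypothetical): provider-specific operand settings for the shared chart
provider: gcp
gcp:
  # illustrative keys only; the real keys would come from the provider's source values
  GCP_PROJECT_ID: "my-project"
  GCP_ZONE: "us-central1-a"
```

Each CI job could then run something like `helm install peer-pods ./peer-pods -f values-gcp.yaml`, with one values file per provider (chart name and path are also assumptions).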
stevenhorsman left a comment
Thanks for starting this @beraldoleal - I am very ignorant on helm, so I really appreciate it. As Wainer has said - let's get some CI on this as soon as we can. I'm not sure about all the providers being together and whether that is confusing for folks that only deal with one of them, but we can canvass opinion more widely in the community for this.
Hi, @wainersm @stevenhorsman... I addressed your first comments, and this is a first look at how it could be. For now I did only gcp, to get early blockers/no-gos from the community. If you agree with the approach, I will work on the next providers and on the remaining parts of the chart itself. Right now it's only the providers' operands.
stevenhorsman left a comment
It looks like many of these commits are for updating the config-extractor, so I think it would be good to handle them as a separate PR rather than merging them into the charts one
Can these changes go into a prior commit in the tool update PR? Although they are the mechanism to generate the charts, I think separating the code out will help keep this commit clean (which I think is needed, as I'm hoping to get some people with more charts experience involved in the review, and they don't need the distraction of the mechanism we use to create the templates). Does that make sense?
Hmm... I'm splitting the PR, no problem... but I'm not sure about the Makefile changes. Those targets depend on the charts existing. They are chart-specific.
```yaml
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  CLOUD_PROVIDER: "{{ .Values.provider }}"
  DISABLECVM: "{{ not .Values.confidential.enabled }}"
```
Hi @beraldoleal !
User can set DISABLECVM in the provider's values. In this case it should honor that value, right?
i.e. if .Values.DISABLECVM then DISABLECVM else not .Values.confidential.enabled
Then in the loop for the "all" configs below, it should skip DISABLECVM, otherwise DISABLECVM will be set twice.
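A minimal sketch of that conditional in the ConfigMap template, assuming the user-facing key is spelled DISABLECVM in the provider's values (this is just one possible way to write it, not code from this PR):

```yaml
  {{- if hasKey .Values "DISABLECVM" }}
  DISABLECVM: "{{ .Values.DISABLECVM }}"
  {{- else }}
  DISABLECVM: "{{ not .Values.confidential.enabled }}"
  {{- end }}
```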
Actually, what I said above about the loop for "all" is wrong. DISABLECVM is in the provider-specific config. We could move it to the common "all" configs, right?
Hi @wainersm, yes, currently DISABLECVM is not global/common code. Each provider implements its own. Because of that, I don't think I'd like to move it to the global/common code in this PR, otherwise this PR becomes a refactoring of the current code. Some providers don't have it, like docker, vsphere and ibmcloud-powervs.

That said, I'm leaning towards users not needing to be aware of these internal details. Whether a flag is per-provider or common code shouldn't be exposed to users. Even if a flag is in common code with a single default, users might need different values depending on their provider scenario; for instance, PROXY_TIMEOUT or VXLAN_PORT might need different values for AWS vs libvirt. This seems a good reason to keep the "all" abstraction out of the chart and let each provider config be self-contained and explicit. WDYT?

> User can set DISABLECVM in the provider's values. In this case it should honor that value, right?

Yes, you are right. The reason I had confidential.enabled was as syntactic sugar, a single toggle that could set multiple related configs. For instance, confidential.enabled=true could trigger DISABLECVM=false along with other CVM-related settings.

I really would like to avoid having users set values in multiple places just to change one thing. However, I realized this abstraction is more relevant for downstream. Removing it for now; we can revisit if a clear use case emerges.
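As an illustration of that argument (the values and layout below are made up, not from this PR), keeping each provider's config self-contained lets the same key carry different defaults per provider:

```yaml
# Illustrative only: the same keys with different per-provider defaults
aws:
  PROXY_TIMEOUT: "5m"
  VXLAN_PORT: "4789"
libvirt:
  PROXY_TIMEOUT: "15m"
  VXLAN_PORT: "8472"
```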
> Hi @wainersm, yes, currently DISABLECVM is not global/common code. Each provider implements its own. Because of that, I don't think I'd like to move it to the global/common code in this PR, otherwise this PR becomes a refactoring of the current code. Some providers don't have it, like docker, vsphere and ibmcloud-powervs.

True. DISABLECVM is not available in all providers.

> That said, I'm leaning towards users not needing to be aware of these internal details. Whether a flag is per-provider or common code shouldn't be exposed to users. Even if a flag is in common code with a single default, users might need different values depending on their provider scenario; for instance, PROXY_TIMEOUT or VXLAN_PORT might need different values for AWS vs libvirt. This seems a good reason to keep the "all" abstraction out of the chart and let each provider config be self-contained and explicit. WDYT?

"This seems a good reason to keep the "all" abstraction out of the chart ..." I didn't understand what it means to be out of the chart. :(

> > User can set DISABLECVM in the provider's values. In this case it should honor that value, right?
>
> Yes, you are right. The reason I had confidential.enabled was as syntactic sugar, a single toggle that could set multiple related configs. For instance, confidential.enabled=true could trigger DISABLECVM=false along with other CVM-related settings. I really would like to avoid having users set values in multiple places just to change one thing. However, I realized this abstraction is more relevant for downstream. Removing it for now; we can revisit if a clear use case emerges.

Cool!
```
@@ -0,0 +1,54 @@
# TODO: This pre-install hook should be avoided!
```
I lack the helm knowledge to fully understand the problem here. Could you explain a bit more?

My concern with this batch stuff, IIUC, is that we won't be able to easily test a change to peerpod-ctrl on pull requests, as it references a released version (maybe referencing a commit would be fine, but even then it would already have to be merged).
Actually, the plan is to eliminate this Job and pre-install hooks as well... instead, template the peerpod-ctrl resources directly in a specific peerpod-ctrl chart. That way helm uninstall properly cleans up everything, and we could test peerpod-ctrl changes in a PR just by running helm install from the branch.
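Not part of this PR, but to sketch what that could look like: peerpod-ctrl could become a local subchart that the main chart pulls in as a file:// dependency, so its resources are templated, installed and uninstalled with the release like everything else (names and versions below are placeholders):

```yaml
# Chart.yaml of the peer-pods chart (hypothetical sketch)
apiVersion: v2
name: peer-pods
version: 0.1.0
dependencies:
  - name: peerpod-ctrl
    version: 0.1.0
    # local subchart, so helm uninstall cleans it up and a PR branch can be tested directly
    repository: "file://charts/peerpod-ctrl"
```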
Got it. So peerpod-ctrl will be addressed later.
Hi @beraldoleal! The charts look promising. I am only concerned with the pre-install hook that calls kubectl, as I mentioned above. Done this way, it seems that helm does not add much value and actually makes the deployment more complicated; I might be wrong, as I don't know much about helm. Other than that, could we make a fully working version for docker or libvirt so that we could run it locally to experiment with?
This is a first draft of our deployment via charts. The CoCo project is moving from the operator to an umbrella chart that will have other charts, including kata-containers, as dependencies. This way we can list peerpods as a dependency of that chart. Ideally, our kustomize folder could be replaced by this chart, to avoid duplication. But I would like to start simple with the chart first and replace it later.

Signed-off-by: Beraldo Leal <[email protected]>
Let's run the make target on CI to check for inconsistencies between the templates and the source code values.

Signed-off-by: Beraldo Leal <[email protected]>
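For context on the umbrella chart mentioned in the first commit message, the idea would be roughly something like this (chart names, versions and repository URL are placeholders; nothing here is defined yet):

```yaml
# Chart.yaml of the CoCo umbrella chart (hypothetical sketch)
apiVersion: v2
name: confidential-containers
version: 0.1.0
dependencies:
  - name: kata-containers
    version: 0.1.0                             # placeholder
    repository: "https://example.org/charts"   # placeholder
  - name: peer-pods
    version: 0.1.0                             # placeholder
    repository: "https://example.org/charts"   # placeholder
```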