Solved

Manually adjust the kube-apiserver?


Userlevel 1
Badge +2

Hi,

Is it possible to manually adjust the file /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml and apply the update to the Kubernetes cluster?

I tried to adjust it and then ran:

sudo systemctl daemon-reload && sudo systemctl restart kubelet-master

But when I describe the kube-apiserver pod, I see that the adjustments are not applied.

Anibal


Best answer by Anibal Ulisses 10 July 2020, 16:04

Hello @crisj , yes I can share.

Basically, two files need to be adjusted.

First file located at /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/kubernetes/helm.rb

Original file:

# frozen_string_literal: true

module Gitlab
  module Kubernetes
    module Helm
      HELM_VERSION = '2.16.6'
      KUBECTL_VERSION = '1.13.12'
      NAMESPACE = 'gitlab-managed-apps'
      NAMESPACE_LABELS = { 'app.gitlab.com/managed_by' => :gitlab }.freeze
      SERVICE_ACCOUNT = 'tiller'
      CLUSTER_ROLE_BINDING = 'tiller-admin'
      CLUSTER_ROLE = 'cluster-admin'
    end
  end
end
 

 

Adjusted file:

# frozen_string_literal: true

module Gitlab
  module Kubernetes
    module Helm
      HELM_VERSION = '2.16.6'
      KUBECTL_VERSION = '1.13.12'
      HTTP_PROXY = 'http://<proxy-ip>:3128'
      HTTPS_PROXY = 'http://<proxy-ip>:3128'
      NO_PROXY = "172.19.0.0/16, 172.20.0.0/16, <my-network-ip-range>/16, localhost, 127.0.0.1"

      NAMESPACE = 'gitlab-managed-apps'
      NAMESPACE_LABELS = { 'app.gitlab.com/managed_by' => :gitlab }.freeze
      SERVICE_ACCOUNT = 'tiller'
      CLUSTER_ROLE_BINDING = 'tiller-admin'
      CLUSTER_ROLE = 'cluster-admin'
    end
  end
end
 

 

For NO_PROXY, the 1st and 2nd IP ranges are the internal k8s network ranges. It's really important to keep the space after each comma; without that space it doesn't work. I worked hard to identify that space.
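A plausible explanation for the space sensitivity (my guess, not verified against the GitLab source) is that somewhere the list is split on the exact two-character separator ", ". A quick Ruby illustration of what changes when the space is dropped:

```ruby
# Splitting on ", " (comma + space), the separator used in the NO_PROXY value above.
with_space    = "172.19.0.0/16, 172.20.0.0/16, localhost".split(", ")
without_space = "172.19.0.0/16,172.20.0.0/16,localhost".split(", ")

# with_space    => ["172.19.0.0/16", "172.20.0.0/16", "localhost"]
# without_space => ["172.19.0.0/16,172.20.0.0/16,localhost"]  (one unsplit entry)
```

With the space missing, the whole string stays one entry, so no individual host or CIDR would ever match.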

 

Second file, located at /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/kubernetes/helm/pod.rb

Original part of the file that needs to be adjusted:

def generate_pod_env(command)
  {
    HELM_VERSION: Gitlab::Kubernetes::Helm::HELM_VERSION,
    TILLER_NAMESPACE: namespace_name,
    COMMAND_SCRIPT: command.generate_script
  }.map { |key, value| { name: key, value: value } }
end

 

Adjusted part of the file:

def generate_pod_env(command)
  {
    HELM_VERSION: Gitlab::Kubernetes::Helm::HELM_VERSION,
    HTTP_PROXY: Gitlab::Kubernetes::Helm::HTTP_PROXY,
    HTTPS_PROXY: Gitlab::Kubernetes::Helm::HTTPS_PROXY,
    NO_PROXY: Gitlab::Kubernetes::Helm::NO_PROXY,
    TILLER_NAMESPACE: namespace_name,
    COMMAND_SCRIPT: command.generate_script
  }.map { |key, value| { name: key, value: value } }
end
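For context on what that `.map` produces: Kubernetes expects a container's env as a list of `{name:, value:}` pairs, and the method converts the hash into exactly that shape. A standalone sketch (plain Ruby, no GitLab dependencies; the proxy address is a placeholder):

```ruby
# Mimics the generate_pod_env transformation on a sample hash.
env = {
  HELM_VERSION: '2.16.6',
  HTTP_PROXY: 'http://proxy.example:3128'  # placeholder address
}.map { |key, value| { name: key, value: value } }

# env is now in the shape Kubernetes expects for a container's `env:` list:
# [{ name: :HELM_VERSION, value: '2.16.6' },
#  { name: :HTTP_PROXY,   value: 'http://proxy.example:3128' }]
```

So adding the three proxy constants to the hash is enough for them to land in the Tiller pod's environment.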

 

With these adjustments I can deploy Helm Tiller and all the other applications successfully.

Now I'm testing the deploy.

Anibal


This topic has been closed for comments

19 replies

Userlevel 1
Badge +5

Hello @Anibal Ulisses ,

Karbon deploys “Managed” Kubernetes environments. What we mean by that is that those environments are set up by Nutanix using our best practices and supported by us. The tradeoff is that no manual customization is possible unless instructed by Nutanix Support.

If you are looking for a customizable Kubernetes solution that you intend to maintain on your own, then I would suggest getting in touch with your Nutanix account team and looking into Nutanix Calm, which by default has a Kubernetes blueprint that you can customize.

You’ll be responsible for supporting your Kubernetes cluster after that, but it gives you a fast start that is easily customizable to your needs.

Best regards,

Badge +1

Hi @Anibal Ulisses Could you please let me know what modification you would like to apply to the Kubernetes cluster?

Modifying the /var/nutanix/etc/kubernetes/manifests/kube-apiserver.yaml file is not supported. These files are normally overwritten during a k8s upgrade or host upgrade.

Userlevel 1
Badge +2

@Tapati basically I only want to use PodPreset (https://kubernetes.io/docs/concepts/workloads/pods/podpreset/). It's needed for the GitLab integration, because GitLab needs to deploy a few pods that require internet access, and I can only reach the internet through a proxy.

@vshuguet If I need to move to Calm, I'll also have to consider other solutions, like OpenShift or the SUSE CaaS Platform, and I want to stay with Karbon for now...

Userlevel 1
Badge +5

Where are you pulling your pod/containers images from? Internet or an internal repo?

If you’re pulling them from the Internet, then I don’t see the need for a proxy, so I’m assuming it is from an internal repo?

If so, then you could “build” a container image internally that would “bake in” those settings (assuming env variables like “http_proxy” and “https_proxy”?) then use those to deploy, so that you do not need a PodPreset?

We can look at enabling PodPreset for a future version of Karbon. Could you open a Request For Enhancement (RFE) ticket on your support portal so that we can track this appropriately?

Best regards,

Userlevel 1
Badge +2

@vshuguet Basically I'm trying to deploy it from GitLab: it tries to deploy Helm Tiller, the pod needs to access the internet, and that is where my problem comes from.

If you take a look, I have already posted some questions about it.

Ok, I can open an RFE, but I need to fix the integration between GitLab and Karbon over the proxy within 2 weeks, and I'm not sure an RFE will be resolved in that time. Without another option I'll really need to consider moving away from Karbon, and I really don't want to do that...

Userlevel 1
Badge +5

2 weeks isn’t enough time to implement a new feature, test it properly and release it, while we have other development in flight, for sure.

We can however look at it for future releases.

I haven’t deployed the GitLab integration with Kubernetes in a while, but I’m pretty sure some of our other customers have, so there ought to be a way to do that without a PodPreset; I’m just not exactly sure how.

Are you using the OpenSource version of GitLab or do you have support from them?
If you have support from them you could reach out and explore other options with them.

Userlevel 1
Badge +2

@vshuguet I’m using GitLab with support, and I also opened a ticket with GitLab: installation without internet access is not supported. They could only point to PodPreset as a way to solve my problem.

I really don't understand why in this case https://next.nutanix.com/karbon-kubernetes-service-30/istio-service-mesh-on-nutanix-karbon-32639 it is suggested to adjust the kube-apiserver and restart it, and it works, but when I try it doesn't work for me.

Unfortunately I think I need to move away from Karbon, and if I start with another solution it will be hard to move back.

Userlevel 1
Badge +2

@vshuguet RFE case opened, let's see.

Userlevel 4
Badge +1

@Anibal Ulisses the post you are referring to has a disclaimer at the very beginning about not being supported and only being shared for demo purposes.

Don’t know if this would work, but you could use the extraEnv parameter in your GitLab Gitaly chart to pass the OS proxy configuration, like you would do in a typical Linux VM. Usually this is done in the Dockerfile, but maybe in this case it could work too.

extraEnv:
  HTTP_PROXY: "http://1.1.1.1:3128"
  HTTPS_PROXY: "http://1.1.1.1:3128"
  NO_PROXY: "localhost,127.0.0.1,<lan_IPs>"

https://docs.gitlab.com/charts/charts/gitlab/gitaly/index.html

Userlevel 4
Badge +1

For additional reference, I just found this on the GitLab website.

https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1399

Badge +3

Hello @Anibal Ulisses , I have never implemented this exact scenario myself, but you can look at the following items to try to solve your problem:

  • look at the official GitLab documentation to try to find a workaround (see Jose's example above)
  • try a fully offline approach with a local replicated registry; this is often the best way when you have a complex network situation
  • if you want to modify pod deployments on the fly, look at the MutatingWebhook logic instead; it is a stable Kubernetes feature, while PodPreset is in alpha state and has not evolved since k8s 1.6
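For reference, the core of the MutatingWebhook approach is an admission response carrying a base64-encoded JSONPatch that appends env vars to a container. A minimal Ruby sketch of building that patch (my own illustration, not from the linked repo; the proxy address is the placeholder used above, and a real webhook also needs a TLS server, a MutatingWebhookConfiguration, and must echo back the request uid):

```ruby
require 'json'
require 'base64'

proxy = 'http://<proxy-ip>:3128'  # placeholder, as in the files above

# JSONPatch operations: "-" appends to the end of the first container's
# existing env array (the array must already exist in the pod spec).
patch = %w[HTTP_PROXY HTTPS_PROXY].map do |var|
  { op: 'add',
    path: '/spec/containers/0/env/-',
    value: { name: var, value: proxy } }
end

# The AdmissionReview response embeds the patch base64-encoded.
# (Trimmed: a complete response also carries apiVersion, kind, and uid.)
response = {
  response: {
    allowed:   true,
    patchType: 'JSONPatch',
    patch:     Base64.strict_encode64(patch.to_json)
  }
}
```

The API server decodes and applies the patch before the pod is persisted, which is how the webhook injects variables without a PodPreset.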

Best Regards

Userlevel 1
Badge +2

hi @JoseNutanix / @crisj 

Nothing worked for me. Basically the Helm Tiller installation, from the GitLab CI/CD integration, is not getting the proxy information, so I can't get a successful installation.

@JoseNutanix the second link you sent me works fine to integrate GitLab with the proxy, but it doesn't let the Helm Tiller pod installation reach the internet through the proxy.

@crisj the workaround doesn't work: an offline approach isn't documented by GitLab, and the MutatingWebhook isn't accepting the proxy information.

Badge +3

HI @Anibal Ulisses 

how did you implement the MutatingWebhook? Because it works with any settings you want to implement.

 

you can look at an example here: https://github.com/tuxtof/nodeaffinity-webhook

where I inject nodeAffinity under certain conditions

you can mimic the same idea to add additional proxy env variables to your pods. It is a little bit complex, but if GitLab doesn't propose a proper solution or an offline approach itself, I don't have any other idea

Userlevel 1
Badge +2

@crisj the effort to configure it is really high. I'm trying my best, but almost all the examples are about enabling a proxy for pod communication; I didn't see any example or use case like mine, where I need to pass a variable into the pod just to let it reach the internet and finish the installation. PodPreset is really much simpler.

Userlevel 1
Badge +2

I solved the issue myself by adjusting code in GitLab, and it worked fine.

Badge +3

Hello @Anibal Ulisses 

that’s very good news. Can you say a little bit more about how you solved this GitLab problem? It's always interesting to know

 

Best regards

Userlevel 1
Badge +2

Hello @crisj , yes I can share. See the full write-up in the best answer at the top of this thread.

Anibal

Badge +3

ok, pure GitLab hacking: pushing env vars into each function GitLab uses to deploy things

good job @Anibal Ulisses 

Userlevel 1
Badge +2

thank you @crisj 

code hacking ;-)

I think this topic can be closed