Getting Started with Migrate for Anthos
Migrate for Anthos provides a near real-time solution for taking an existing VM and making it available as a Kubernetes-hosted Pod, with all the benefits of running your applications in a Kubernetes cluster.
Create the source Compute Engine
- Run the following command to create and configure a Compute Engine instance that will act as the source of the VM to be migrated:
gcloud compute instances create source-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-1 \
  --subnet=default \
  --scopes="cloud-platform" \
  --tags=http-server,https-server \
  --image=ubuntu-minimal-1604-xenial-v20200916 \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=10GB \
  --boot-disk-type=pd-standard \
  --boot-disk-device-name=source-vm \
--metadata startup-script='#! /bin/bash
# Installs apache and a custom homepage
sudo su -
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a simple start up script!</p>
</body></html>
EOF'
You have installed the Apache web server and created a basic web page via the startup script.
- Create a firewall rule to allow HTTP traffic:
gcloud compute firewall-rules create default-allow-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
- In the Cloud Console navigate to Compute Engine > VM instances and locate the row for the instance you created and copy the External IP address.
- Paste the instance’s IP address to your browser address bar. Prefix it with http://.
- You should now see the “Hello World” page.
- To migrate the VM, first stop it from running.
- In the Cloud Console, navigate to Compute Engine > VM instances, check the box to the left of source-vm, then click STOP at the top of the page.
- Confirm the shutdown by clicking Stop in the pop-up window. You can continue to the next section while the VM is shutting down.
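The console steps above can also be performed entirely from Cloud Shell. The commands below are a sketch using the zone and instance name from earlier in this lab:

```shell
# Fetch the instance's external IP address
EXTERNAL_IP=$(gcloud compute instances describe source-vm \
  --zone=us-central1-a \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')

# Confirm the Apache home page is being served
curl -s http://$EXTERNAL_IP

# Stop the VM before migrating it
gcloud compute instances stop source-vm --zone=us-central1-a
```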
Create a processing cluster
In the following steps you’ll create a GKE cluster in the Cloud that you’ll use as a processing cluster. This is where you’ll install Migrate for Anthos and execute the migration.
- In Cloud Shell use the following command to create a new Kubernetes cluster to use as a processing center:
gcloud container clusters create migration-processing --zone=us-central1-a --machine-type n1-standard-4 --image-type ubuntu --num-nodes 3 --enable-stackdriver-kubernetes
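Cluster creation can take several minutes. As a quick sanity check (not a required lab step), you can confirm the cluster reached a running state:

```shell
# List clusters in the zone; STATUS should show RUNNING
gcloud container clusters list --zone=us-central1-a
```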
Install Migrate for Anthos
To allow Migrate for Anthos to access Container Registry and Cloud Storage, you need to create a service account with the storage.admin role.
- In Cloud Shell create the m4a-install service account:
gcloud iam service-accounts create m4a-install \
--project=$PROJECT_ID
- Grant the storage.admin role to the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:m4a-install@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/storage.admin"
- Download the key file for the service account:
gcloud iam service-accounts keys create m4a-install.json \
--iam-account=m4a-install@$PROJECT_ID.iam.gserviceaccount.com \
--project=$PROJECT_ID
- Connect to the cluster:
gcloud container clusters get-credentials migration-processing --zone us-central1-a
- Set up Migrate for Anthos components on your processing cluster by using the migctl command-line tool included with Migrate for Anthos:
migctl setup install --json-key=m4a-install.json
- Validate the Migrate for Anthos installation. Use the migctl doctor command to confirm a successful deployment:
migctl doctor
It may take a minute or more before the command reports a successful deployment.
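migctl doctor is the authoritative check, but you can also look at the component Pods directly. The v2k-system namespace below is an assumption based on typical Migrate for Anthos installations; adjust it if your install differs:

```shell
# List Migrate for Anthos component Pods
# (namespace name is an assumption; migctl doctor remains the definitive check)
kubectl get pods -n v2k-system
```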
Migrating the VM
Now you’ll create a migration plan with migration details, then use it to migrate the VM.
To use Compute Engine as a migration source, you must first create a service account with the compute.viewer and compute.storageAdmin roles:
- In Cloud Shell create the m4a-ce-src service account:
gcloud iam service-accounts create m4a-ce-src \
--project=$PROJECT_ID
- Grant the compute.viewer role to the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.viewer"
- Grant the compute.storageAdmin role to the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.storageAdmin"
- Download the key file for the service account:
gcloud iam service-accounts keys create m4a-ce-src.json \
--iam-account=m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com \
--project=$PROJECT_ID
- Create the migration source:
migctl source create ce source-vm --project $PROJECT_ID --json-key=m4a-ce-src.json
Where m4a-ce-src.json is the key file for the service account you created above.
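If source creation fails with a permissions error, you can verify which roles are bound to the service account. This is a standard gcloud pattern for troubleshooting, not a step from the lab:

```shell
# Show the roles bound to the m4a-ce-src service account
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com" \
  --format="value(bindings.role)"
```

You should see roles/compute.viewer and roles/compute.storageAdmin in the output.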
Create a migration
You begin migrating VMs by creating a migration. This results in a migration plan object.
A migration is the central object with which you perform migration actions, monitor migration activities and status with the migctl tool or in the Cloud Console. The migration object is implemented as a Kubernetes Custom Resource Definition (CRD).
Next, you will create a migration by running the migctl tool.
- Create the migration plan that defines what to migrate:
migctl migration create my-migration --source source-vm --vm-id source-vm --intent Image
- Run the following to check the status:
migctl migration status my-migration
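Because the migration object is implemented as a Kubernetes CRD, you can also inspect it with kubectl. The exact resource names vary by version, so the loose grep below is a safe way to discover them:

```shell
# Discover the migration-related custom resources registered in the cluster
kubectl api-resources | grep -i migration
```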
Review the migration plan
- For this lab, you will use the default migration plan. Download the migration plan to review it:
migctl migration get my-migration
- Open the my-migration.yaml file in your preferred text editor or the Cloud Shell code editor to review.
If you need to make changes, you would upload the new plan with the migctl migration update my-migration command.
Migrate the VM using the migration plan
- This command will migrate the VM and generate artifacts you can use to deploy the workload:
migctl migration generate-artifacts my-migration
- After the migration begins, check its status by running the following:
migctl migration status my-migration
- You can add the -v flag for more verbose output:
migctl migration status my-migration -v
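Migration can take several minutes. A simple polling loop (a convenience sketch, not part of the lab) keeps the status visible while you wait:

```shell
# Poll migration status every 30 seconds; interrupt with Ctrl+C when done
while true; do
  migctl migration status my-migration
  sleep 30
done
```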
Deploying the migrated workload
In the following steps you’ll get the deployment artifacts that were generated during the migration, then use them to deploy your migrated workload to the cluster. As a last step, you’ll confirm that the “Hello World” web page is served by your migrated app.
- Once the migration is complete, get the generated YAML artifacts:
migctl migration get-artifacts my-migration
- The command downloads files that were generated during the migration:
- deployment_spec.yaml — The YAML file that configures your workload.
- Dockerfile — The Dockerfile used to build the image for your migrated VM.
- migration.yaml — A copy of the migration plan.
- If the Cloud Shell editor isn’t open already, click the Open Editor button, then Open in new window.
- Open the deployment_spec.yaml file and locate the Service object whose name is source-vm.
- Beneath that Service definition, add another Service at the end of the file that exposes port 80 for access to your web server over HTTP:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: source-vm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
- Your file should look like this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: source-vm
spec:
  clusterIP: None
  selector:
    app: source-vm
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: source-vm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
- Save the file.
- Apply the deployment_spec.yaml to deploy the workload:
kubectl apply -f deployment_spec.yaml
- Now check for an external IP address:
kubectl get service
When the web server is ready, you’ll see an external IP address for the my-service you added.
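The external IP may show as &lt;pending&gt; for a minute or two while the load balancer is provisioned. You can poll for it with a jsonpath query (a convenience sketch; the field path follows the standard Service status schema):

```shell
# Wait until the LoadBalancer Service has an external IP assigned
until kubectl get service my-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -q .; do
  echo "Waiting for external IP..."
  sleep 10
done
kubectl get service my-service
```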
Test the migration
Test the migration by opening a browser and visiting the web page at the external IP address of my-service (be sure to use HTTP rather than HTTPS). For example:
http://&lt;my-service-external-IP&gt;
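You can also run the same check from Cloud Shell with curl instead of a browser (a sketch using the my-service Service added earlier):

```shell
# Fetch the page from the migrated workload and check for the heading
EXTERNAL_IP=$(kubectl get service my-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s http://$EXTERNAL_IP | grep "Hello World"
```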