My day at the AWS CSAA (Released February 2018) certification exam

Finally I sat for the exam

After preparing for almost two months, I mustered enough confidence and enrolled for the exam. I booked the 8 AM slot on Tuesday, April 17th, wrote and passed the exam, and am now waiting for the final certificate.

Before the exam

Do some yoga, meditation, deep breathing, or something similar of your choice. It helps you relax, concentrate, and refresh your memory. Go early if the center is open; there is no need to wait until your scheduled time. If a terminal is available, you can start early and finish early. I arrived at 7:30 AM and started my test 5 minutes early.

After arriving at the test center

My exam center was in Kondapur, Hyderabad, India, on a busy commercial street with lots of vehicular traffic. As soon as I got in, the receptionist greeted me and I was seated. A few minutes later I was asked for IDs. I had laminated copies, but they wanted originals; I told them I had laminated the originals, which he accepted. He also looked at my credit card, PAN, and Aadhaar cards.

They asked me to sign a sheet of paper and to empty everything (watch, keys, coins, wallet, phone, handkerchief) into a locker, which was then locked while I kept the key. He even checked my eyeglasses and approved them as acceptable. A very thorough process. I was impressed, and actually found myself comparing it with AWS security 🙂

The exam room was in the corner, a separate isolated room with five chairs and terminals, very clean with good lighting and AC. The AC was a little too cold, so I asked them to raise the temperature, which they kindly did. They had a small scratch board available for scribbling.

The 15-inch LCD monitor was bright enough, but I complained that it was dirty, so they came and tried to wipe it, to no avail. They said the marks were from the scratch-pad marker, which I think was the case.

The room was supposed to be soundproof, but as soon as I started the exam I was distracted by someone talking loudly on the phone. People here tend to be on the louder side, and on top of that the cellphone signal is weak, so with all the ambient noise from traffic and honking you can’t really blame them for talking loudly. I complained, and they quickly gave me earplugs, which did about 70% of the job, though I could still hear voices at times. This is where your meditation skills help, I think 🙂

Data connectivity: it was bad where I took my exam. There was a one- or two-second latency after clicking an answer before it said “Communicating” and moved to the next question. Twice during the exam, the data connectivity was lost entirely. I had to call the staff in, and they had to exit and log back in. When the connection was restored, the clock resumed from where it had stopped and took me back to my last position; all my answers had been saved to the server, so I could continue from where I left off. Ideally you should use such an interruption to take your break, so there is no lost time. When you go to the restroom and come back, they empty your pockets to check you again and have you sign the sheet.

The Exam

English grammar, keywords, singulars/plurals and tenses

My mother tongue is not English, but I consider myself quite proficient in it. I think the exam is skewed towards people proficient in the language (English in this case) and its grammar. Many questions were very subtle. You need to read every keyword in the question and understand its significance to the overall meaning of the question and to the choice of the correct answer. Keywords are either technical (example: indexed data) or plain English, such as “customer is willing to move”. Grammar matters a lot: for example, you need to notice the difference between singular (e.g. EC2) and plural (e.g. EC2s).

Even the answers had many subtle keywords that you must pay attention to. These keywords sometimes map directly to one of the five pillars of the Well-Architected Framework and sometimes to the very meaning of the context. It’s very easy to make silly mistakes, even if you know the concepts and are an AWS guru, if you don’t pay attention to the keywords. I changed my answers on at least half a dozen questions after I finished and went back to review. The subtleties became really clear once I revisited a question and its answers a second or third time.

Difficulty level of my exam

This exam was probably the most difficult one (barring the A Cloud Guru final exam :-)) I have ever attempted. On many of the Whizlabs tests (February 2018 version) I took, I got almost everything correct, but it is extremely difficult to score like that here.

Many difficult questions appeared in the first quarter of the test, which makes you feel you will never finish on time. But once I flagged the difficult ones and moved on, the questions got simpler and more straightforward, with single one-word answers. It stayed at that level for a while, until I reached question 45 or so; then the questions got difficult again and stayed that way until the end. I would rate the technical difficulty at 8 or 9 out of 10, and on top of that there is the language difficulty, which I’d rate at 8 out of 10. In my opinion it is good to make the exam difficult, keep the curve up, and maintain the value of this certification. Hats off to Amazon for creating such a wonderful, enjoyable, yet extremely challenging question bank.

How I used my time after the clock started

  • Total time given was 130 minutes for 65 AWS questions
  • Another 5 feedback and rating questions were given at the end, after completion of the exam, with additional time
  • Spent around 80 minutes to finish all 65 questions, with 15 flagged for review
  • Went back to those 15, spent 30 minutes, and changed the answers on 4 or 5 of them
  • Was left with 20 minutes and went back over all the unflagged questions, starting from question number one. I was able to review up to question 48 or so before time was up. Glad I did this exercise, since I changed the answers on at least 3 items
  • Lost the data connection twice; they had to exit and re-login, and the exam resumes from where the connection was lost

Some of the questions that appeared in my exam

To comply with Amazon’s terms and policies, I am only giving you summaries of the types of questions that appeared in my exam, grouped into topics, with the keywords highlighted. Most of the questions referred directly to one of the five pillars of the Well-Architected Framework. Match the keywords in the question to one of the pillars and the answer becomes obvious.

  1.  EBS
    1. If my app needs large sequential I/O, what should you use?
      1. Answer: use Throughput Optimized HDD (st1).
      2. Pillars: Cost Optimization and Performance Efficiency pillars
    2. If my app does write-intensive, small, random I/O, how do you design for better I/O performance?
      1. Answer: RAID 0 stripe multiple EBS volumes together for high I/O performance. This was the best available choice, since Provisioned IOPS volumes were not among the options.
      2. Pillars: Cost Optimization and Performance Efficiency pillars
    3. Your on-premises server runs an application that uses a proprietary file system. How do you migrate to AWS?
      1. Answer: use EBS volumes with EC2. Other choices included EFS, Stored Volumes, etc. The keyword is proprietary file system, since EFS supports NFS and Stored Volumes support iSCSI.
      2. Pillar: Operational Excellence pillar
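The RAID 0 answer is easier to remember if you picture what striping actually does. Here is a toy, pure-Python model (not real EBS or RAID tooling; the 4-byte stripe size and helper names are invented for illustration) showing how a sequential byte stream is split round-robin across volumes, which is why aggregate I/O throughput grows with the number of volumes:

```python
# Illustrative sketch only: models RAID 0 striping over plain byte arrays.
STRIPE_SIZE = 4  # bytes per stripe; real RAID stripes are typically 64-256 KiB

def raid0_write(data: bytes, num_volumes: int) -> list:
    """Distribute data round-robin across num_volumes stripes."""
    volumes = [bytearray() for _ in range(num_volumes)]
    for i in range(0, len(data), STRIPE_SIZE):
        volumes[(i // STRIPE_SIZE) % num_volumes] += data[i:i + STRIPE_SIZE]
    return [bytes(v) for v in volumes]

def raid0_read(volumes: list) -> bytes:
    """Reassemble the original byte stream by reading stripes in write order."""
    out = bytearray()
    offsets = [0] * len(volumes)
    vol = 0
    while offsets[vol] < len(volumes[vol]):
        out += volumes[vol][offsets[vol]:offsets[vol] + STRIPE_SIZE]
        offsets[vol] += STRIPE_SIZE
        vol = (vol + 1) % len(volumes)
    return bytes(out)
```

Because each stripe lands on a different volume, a large sequential read or write touches all volumes in parallel; the trade-off is that losing any single volume destroys the whole array.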
  2. CloudWatch
    1. EC2 calls an API on behalf of users (it did not mention which API or which users; assume the AWS API and IAM users). How do you monitor when API calls 1) exceed 5 per second, 2) exceed 3000 per hour, 3) need to be counted per user?
      1. Two answers (multiple-select): enable CloudTrail and use custom CloudWatch metrics to monitor API calls. Wrong choices included CW Metrics (the custom keyword was missing), CW Logs, etc.
      2. Pillar: Operational Excellence
  3. CloudTrail
    1. A question about enabling CloudTrail automatically for future regions by turning on CloudTrail for all regions
  4. CloudFront
    1. An EC2 web server serves static and dynamic content and, because of this, has high CPU utilization. How do you reduce the load?
      1. Answer: use CloudFront with the EC2 instance as origin. Wrong choices included not-so-obvious ones such as CloudFront with URL signing, and ElastiCache.
      2. Pillar: Performance
    2. How to serve private S3 content from CloudFront?
      1. Answer: by using an Origin Access Identity for S3, plus URL signing. The keyword here is “private”: you need to disable public access to the S3 bucket and allow only the CloudFront OAI to access it. URL signing is a must since you don’t want people reusing the URLs; they expire after some time.
      2. Pillar: Security, Performance
  5. RDS
    1. An on-premises DB needs to be migrated to AWS. The requirement is redundant data for DR across three Availability Zones. How do you achieve this?
      1. Answer: Amazon Aurora, since the keyword is three AZs. Choices included RDS MySQL with Multi-AZ, DynamoDB, Redshift, etc.
      2. Pillar: Reliability pillar
  6. Elasticache
    1. You need to design web session storage for a million-user web site. What do you recommend?
      1. Answer: ElastiCache. Other, weaker choices: Redshift, S3, EFS, RDS, etc.
      2. Pillars: Performance Efficiency pillar
    2. Many questions included ElastiCache as one of the choices, but it was not the correct one based on the keywords.
      1. For example, if you need data storage for key/value and JSON documents with unlimited capacity and high scalability, DynamoDB is the correct choice, since unlimited capacity and scaling are required.
  7. Cognito
    1. You need to quickly design and develop a mobile application that lets users log in with MFA. What do you suggest?
      1. Answer: Cognito, as it is a ready-to-use identity management solution and provides MFA through SMS. Wrong choices included RDS, S3 policies (a bait for MFA), IAM, etc.
      2. Pillars: Security, Cost Optimization
  8. S3
    1. How do you access S3 privately from on-premises VMs connected via VPN to AWS? Choices included an S3 VPC endpoint via an AWS EC2 proxy, IP whitelisting the CGW, and IP whitelisting the VPG.
      1. Answer: not sure. I recommend reading up on private S3 connections (bypassing the internet) from VPNs and VPCs.
      2. Pillar: Security
    2. How to access S3 from VPC private subnet?
      1. Answer: VPC endpoint. Other choices included NAT Gateway, Internet Gateway etc
      2. Pillar: Security and Cost pillars
    3. Only the CEO needs to access daily reports on S3, which are very confidential. How do you provide this?
      1. Answer: S3 presigned URLs. Choices included AWS KMS key encryption, S3 key encryption, MFA, IAM roles, etc.
      2. Security Pillar
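Presigned URLs are worth understanding beyond the keyword level: the URL itself carries an expiry time and a signature, so the server can validate a request without looking up any session state. The sketch below is a toy stand-in in pure Python (the signing scheme, SECRET, and hostname are invented for illustration; real S3 presigned URLs use AWS Signature Version 4, typically produced by boto3's generate_presigned_url):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-signing-key"  # hypothetical; stands in for the signer's credentials

def presign(bucket, key, expires_in, now=None):
    """Toy presigned URL: the path plus an expiry time, authenticated by an HMAC."""
    now = int(time.time()) if now is None else now
    expires = now + expires_in
    msg = f"GET /{bucket}/{key} {expires}".encode()
    signature = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://{bucket}.example.com/{key}?" + urlencode(
        {"Expires": expires, "Signature": signature}
    )

def verify(bucket, key, expires, signature, now):
    """Accept the request only if the URL is unexpired and the signature matches."""
    if now > int(expires):
        return False  # link has expired, even though the signature is valid
    msg = f"GET /{bucket}/{key} {expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Anyone holding the URL can use it until it expires, which is exactly why the exam pairs “confidential” with a short expiry rather than with encryption choices.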
  9. SQS
    1. A mission-critical order processing system is to be designed on EC2 using ELB/Auto Scaling. How do you decouple it?
      1. Answer: SQS. Wrong choices included SNS, SES, etc., but not SWF, hence SQS was the best of the available options.
      2. Pillars: Reliability pillar
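The decoupling idea behind that answer can be sketched in-process with a plain queue (queue.Queue standing in for SQS; the order IDs and worker loop are invented for illustration): the front end only enqueues and returns immediately, while workers drain the buffer at their own pace, so a slow or temporarily dead worker never loses an order.

```python
import queue
import threading

# In-process stand-in for SQS: messages wait in the buffer instead of
# being dropped when consumers fall behind.
order_queue = queue.Queue()
processed = []

def place_order(order_id):
    order_queue.put(order_id)          # front end returns immediately

def worker():
    while True:
        order_id = order_queue.get()   # blocks until a message arrives
        if order_id is None:           # sentinel: shut the worker down
            break
        processed.append(order_id)     # the slow "order processing" step
        order_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    place_order(f"order-{i}")
order_queue.join()                     # wait until every order has been handled
order_queue.put(None)
t.join()
```

With real SQS the producer and consumer additionally live on separate machines, and unacknowledged messages reappear after the visibility timeout, which is what makes it suitable for the “mission-critical” keyword.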
  10. Lambda
    1. After a user uploads an image, an EC2 instance copies it to S3; another EC2 instance constantly polls S3, retrieves the image, processes it, and copies the result to another bucket. How would you re-design this to be decoupled?
      1. Answer: use Lambda, since they mention re-design and decoupling. Other choices included SNS, SES, etc., but no SQS, so Lambda was the best choice.
      2. Pillars: Reliability pillar
    2. An EC2 instance runs an hourly script that downloads new files from S3 and processes them. How do you improve this for availability?
      1. Answer: invoke a Lambda function when a new file is created on S3
      2. Pillar: Reliability and Performance pillars
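For both Lambda answers, the key shift is from polling S3 to being invoked by an S3 event. A minimal handler sketch follows; the bucket/key layout matches the standard S3 event record shape, while process_object is a hypothetical placeholder (a real function would use boto3 to fetch and transform the object):

```python
# Sketch of the Lambda handler shape for an S3 "object created" trigger.
def handler(event, context=None):
    """Extract (bucket, key) pairs from an S3 event and 'process' each one."""
    handled = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # process_object(bucket, key) would do the real work here
        handled.append(f"{bucket}/{key}")
    return handled
```

Because Lambda runs only when a file actually arrives, there is no hourly gap, no idle polling instance, and no single EC2 box whose failure stops processing.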
  11. Kinesis
    1. A car rental agency needs to monitor GPS locations from all cars to make sure they are not driven too far. If they receive a few thousand data points every hour, how do you process this data?
      1. Answer: use Kinesis Firehose, store the data in S3, and analyze it. The keyword to understand is every hour, meaning it is a data stream coming in 24/7. None of the other choices (EC2, SQS, Lambda) offer streaming-data features.
      2. Pillar: Reliability pillar (data loss is not acceptable)
  12. Application Load Balancer:
    I got 3 or 4 questions on this topic, relating to the choice of ALB over CLB. Luckily I had been reading that very topic in the AWS documentation that morning, and I was able to answer correctly.

    1. A set of EC2 instances runs a set of web services using different ports. How do you balance the load?
      1. Answer: ALB, since multiple web services are running across multiple ports spanning multiple EC2 instances. A CLB can’t route by port and can balance only a single service.
      2. Pillars: Reliability Pillar
    2. A three-tier web application uses an ELB as the front end, a web tier running on EC2 instances, and a DB tier running on RDS instances. How can you introduce fault tolerance?
      1. Answer: a Classic ELB in front of the EC2 instances. The takeaway keyword here is instances (plural). There was one reasonable-looking choice, “CLB in front of RDS instances”, but then I remembered a CLB can only balance web traffic, not RDS traffic.
      2. Pillars: Reliability Pillar
  13. DynamoDB
    1. You have 60 TiB of indexed data, growing exponentially, that you want to move to AWS. Which unlimited, durable storage would you recommend?
      1. Answer: DynamoDB, since the keyword indexed suggests the data is indexed and searchable/queryable. Other choices included RDS, S3, etc. Unlimited and durable apply to S3 as well, but DynamoDB is better than S3 for indexed data.
      2. Pillar: Reliability pillar
    2. An in-house MySQL database cannot keep up, even with the highest available CPU/memory configuration. The system reads small, 400 KB data items, one record at a time. The customer is willing to move to a new architecture. What do you suggest?
      1. Answer: DynamoDB, since the keywords “one record” and “small items” mean no JOINs are performed. The language keyword here is “willing to move”, meaning they are OK going from relational to NoSQL
  14. EC2
    1. Which EC2 purchasing option would you recommend for cost-effective servers that you will use continuously at the same capacity for the next three years? A) Regional Standard Reserved Instances B) Regional Convertible Instances C) Standard Reserved Instances
      1. Answer: please read Jeff Barr’s blog post on Reserved Instances and understand the minute details. I think I made a mistake here. I did not expect such a complicated, in-depth question on an associate-level exam.
      2. Pillar: Cost Optimization pillar
    2. An IAM role for EC2 question. Simple and straightforward.
    3. What would you recommend to run batch processes every Mon/Thu/Fri from 10 to 12?
      1. Answer: Scheduled Reserved Instances
      2. Pillar: Cost Optimization pillar
  15. VPC
    1. How to access S3 from VPC private subnet?
      1. Answer: VPC endpoint. Other choices included NAT Gateway, Internet Gateway etc
      2. Pillar: Security and Cost pillars
    2. A three-tier application with an ELB, web servers, and DB servers needs to have no internet connectivity to tiers 2 and 3
      1. Answer: ELB in a public subnet, web servers in a private subnet, and DB servers in a private subnet
      2. Pillar: Security pillar
  16. Bastion Host 2 questions
  17. NAT 4 questions
    1. You have a NAT and EC2 instances in a private subnet. How do you make this more reliable?
      1. Answer: add NATs in all AZs
      2. Pillar: Reliability pillar
    2. You are migrating from a NAT instance with custom scripts that perform auto scaling. What do you recommend?
      1. Answer: a NAT gateway in all AZs
      2. Pillar: Reliability pillar
    3. A basic question about a private EC2 instance needing internet access for patches. How do you achieve this?
      1. Answer: use a NAT Gateway. Choices included all the gateways: NAT GW, IGW, VPG, Customer GW, etc.
      2. Pillar: Performance Efficiency pillar
  18. EFS
    1. Your Auto Scaling group runs 10 to 50 Linux instances that need access to common storage which is mountable. What do you recommend?
      1. Answer: the keywords common and mountable imply EFS
  19. Route53
    1. One question about the failover routing policy
  20. ECS
    1. You need to design a system running on Docker containers that needs orchestration.
      1. Answer: ECS
  21. Storage Gateway
    1. Your legacy app that uses iSCSI needs a storage solution on AWS for all new storage
      1. Answer: Cached Volumes, since the keyword new is used, meaning all the new data must be stored on AWS. Stored Volumes is a wrong choice here since, with Stored Volumes, you store all new data locally, with a backup on AWS for redundancy purposes only.

What I wish I had read before the exam

As part of my two-month-long preparation, I read the official AWS CSAA study guide (Kindle edition), subscribed to and finished the A Cloud Guru videos and sample tests, and completed all the quizzes in Whizlabs. One thing I noticed today: all the complicated questions had answers in Jeff Barr’s blog. I wish I had read all his blog posts before taking the exam. They are a must-read if you are taking the February 2018 exam, since it targets many newer topics that are not covered in the older exam.


Copyright 2005-2016 KnowledgeHills.