AWS Solution Architect Associate Exam Cheat Sheet

Materials derived from Practice Exams and majority comes from Udemy course by Stephane Maarek.

Regions / Availability Zones (AZ)

  • Regions : AWS geographical regions (ex: us-west-1)
  • Availability Zones (AZ) : isolated locations within a region (ex: us-west-1a)
  • Edge locations : each region is served by many edge locations that cache frequently accessed data close to nearby users

IAM: Users & Groups

  • IAM = Identity and Access Management, Global service
  • Root account created by default, shouldn’t be used or shared
  • Users are people within organization and can be grouped
  • Groups only contain users, not other groups
  • Users don’t have to belong to a group, and user can belong to multiple groups

IAM: Permissions

  • Users or Groups can be assigned JSON documents called policies
  • These policies define the permissions of the users
  • In AWS, apply the least privilege principle: don’t give users more permissions than they need

IAM Policy Structures

  • Consists of
    • Version
    • Id
    • Statement
      • Sid : statement Id
      • Effect : Allow / Deny
      • Principal : account/user/role to which this policy applies
      • Action : list of actions this policy allows or denies
      • Resource : list of resources to which the actions apply
      • Condition : (optional) conditions for when this policy is in effect (see the sketch below)
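For illustration, a minimal policy document can be sketched as a Python dict mirroring the JSON structure above; the Id, Sid, bucket name and actions are hypothetical examples, not from the course.

import json

# Hypothetical identity-based policy allowing read-only access to one S3 bucket
policy = {
    "Version": "2012-10-17",
    "Id": "ExampleS3ReadOnly",
    "Statement": [
        {
            "Sid": "AllowListAndGet",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}

print(json.dumps(policy, indent=2))  # the JSON document you would attach to a user or group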

IAM Roles : assign permissions to AWS services, ex: EC2 instance roles, Lambda function roles, roles for CloudFormation

IAM Credentials Report : report that lists all users and the status of their credentials

IAM Access Advisor : shows the service permissions granted to a user and when those services were last accessed

EC2 : Elastic Compute Cloud = Infrastructure as a Service

Consists of the following capabilities :

  • EC2 : renting virtual machines
  • EBS : storing data on virtual drives
  • ELB : distributing load across machines
  • ASG : auto scaling of services

EC2 User Data : bootstrap an instance (run a list of commands at first launch) using an EC2 User Data script (runs as the root user); see the sketch below
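A rough boto3 sketch (not from the course) of launching an instance with a User Data bootstrap script; the AMI ID and script contents are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Bootstrap script executed once as root on first boot (placeholder commands)
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
"""

ec2.run_instances(
    ImageId="ami-0abcdef1234567890",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # handed to cloud-init and run at first launch
)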

EC2 Instance Types

  • General purpose (Tx, Mx) : diversity of workloads such as web servers or code repositories
  • Compute Optimized (Cx) : for compute-intensive tasks that require high performance (batch processing, gaming, high performance computing)
  • Memory Optimized (Rx, Xx) : for processing large data sets in memory (business intelligence)
  • Storage Optimized : require high read and write access to large data sets on local storage (high frequency online transaction processing (OLTP) systems)

Security Groups

  • can be attached to multiple instances
  • locked down to a region / VPC combination

EC2 Instances Purchasing Options

  • On-Demand : short workloads, predictable pricing (billed per second for Linux, per hour for other OS)
  • Reserved (minimum 1 year) : up to ~75% cheaper than On-Demand, with bigger discounts for longer reservation periods
    • Reserved Instances : long workloads
    • Convertible Reserved Instances : long workloads with flexible instances
    • Scheduled Reserved Instances : (ex : every Thursday between 3-6 PM)
  • Spot Instances : short workloads, cheap, can lose instances
  • Dedicated Hosts : book an entire physical server for a 3-year period, for compliance requirements; can use existing server-bound software licenses

Elastic Network Interface (ENI) : logical component in a VPC that represents a virtual network card

  • Primary private IPv4, one or more secondary IPv4
  • 1 elastic IP per private IP
  • 1 public IP
  • 1 or more security groups
  • 1 MAC address

ENI can be created independently and attached on the fly on EC2 instance for failover.

ENI bound to specific AZ

EC2 Hibernate

  • If an EC2 instance is terminated, data on the root EBS volume is deleted (by default)
  • RAM state is preserved (RAM state is written to a file in root EBS volume)
  • Root EBS must be encrypted

EC2 Nitro

  • better performance (enhanced networking, IPv6)
  • higher speed EBS (max 64K IOPS, non-nitro is 32K IOPS)

EBS (Elastic Block Store) : a network drive that can be attached to EC2 instances while running

  • data persists even after instance termination, unless the volume is the root volume (deleted on termination by default)
  • mounted to 1 instance at a time
  • bound to 1 AZ
  • EBS snapshots (backups of a volume at a point in time) can be copied to another AZ or region

AMI (Amazon Machine Image) : custom EC2 instance

  • must create an AMI by region.
  • must copy AMI to another region to launch instance

Golden AMI : snapshot a state after application installation

EC2 Instance Store : high-performance hardware disk physically attached to the host. However, if the EC2 instance is stopped or terminated, the storage is lost (ephemeral). Good for buffer / cache / temporary data

EBS volumes come in 6 types (4 categories)

  • General Purpose SSD (gp2, gp3) : cost-effective, low latency; can be used as a boot volume
  • Highest performance SSD (io1, io2) : for mission-critical, low-latency or high-IOPS workloads; can also be used as a boot volume
  • Low cost HDD (st1) : throughput intensive, frequent access
  • Lowest cost HDD (sc1) : infrequent access

EBS Multi-Attach – io1 / io2 : can be attached to multiple EC2 instance in the same AZ

EBS Encryption

  • Data at rest is encrypted inside the volume
  • Data in flight (moving between the instance and the volume) is encrypted
  • all snapshots are encrypted

EFS (Elastic File System)

  • managed NFS (network file system) that can be mounted on many EC2 instances at the same time
  • multi AZ
  • expensive (3x gp2), pay per use
  • encryption at rest using KMS
  • only compatible with Linux-based AMIs (not Windows)

RAID 0 : data striped across multiple volumes (higher performance, no redundancy); RAID 1 : data mirrored across multiple volumes

High Availability & Scalability for EC2

  • Vertical Scaling : Increase instance size
  • Horizontal Scaling : increase the number of instances
  • High Availability : run instances for the same application across multiple AZs

ELB (Elastic Load Balancing) : managed load balancer

  • Classic Load Balancer (CLB) : HTTP, HTTPS, TCP, SSL
    • Health check TCP or HTTP based
    • Fixed hostname : ex : xxx.region.elb.amazonaws.com
  • Application Load Balancer (ALB) : HTTP, HTTPS, WebSocket
    • balancing to multiple HTTP applications across target groups
    • health check target groups level
    • balancing to multiple apps on same machine
    • support redirect : ex : HTTP to HTTPS
    • routing table based on path, hostname, query string, headers
    • target groups
      • EC2 instance, ECS, lambda, IP address
  • Network Load Balancer (NLB) : TCP, TLS, UDP
    • forward TCP & UDP to instance
    • handles millions of requests per second
    • lower latency, extreme performance
    • support elastic IP (for whitelisting specific IP)
    • 1 static IP per AZ
  • Gateway Load Balancer (GWLB) : Network Layer, IP Protocol

ELB Sticky Sessions (for CLB and ALB) : route the same user to the same instance behind the load balancer

ELB supports Cross-Zone Load Balancing : each load balancer node distributes traffic evenly across all registered instances in all AZs

ELB support SSL certificates

ELB (ALB, NLB) support SNI (Server Name Indication) which is loading multiple SSL certificates onto 1 web server

ASG (Auto Scaling Group) has the following attributes :

  • Launch configuration
    • AMI + instance type
    • EC2 user data
    • EBS volumes
    • security groups
    • SSH keypair
  • Min / Max size, initial capacity
  • Network + subnet information
  • Load balancer information
  • Scaling policies

ASG can auto scale based on a custom metric using CloudWatch

ASG scaling policies can be based on CPU, network, custom metrics, or a schedule

To update ASG, must provide a new launch configuration / launch template

IAM roles attached to ASG will get assigned to EC2 instances

ASG Scaling Policy

  • Predictive Scaling : forecast load and schedule scaling ahead

Dynamic Scaling : target tracking (ex: keep average CPU at 40%), step scaling, and simple scaling (both specify alarm thresholds, but simple scaling must wait for the cooldown period between actions)

ASG Scaling cooldown (300 second default)

ASG termination policy (default) : balance across AZs, then delete the instance with the oldest launch configuration

AWS RDS (Relational Database Service)

  • managed DB service
  • supports PostgreSQL, MySQL, MariaDB, Oracle, Microsoft SQL Server and Aurora
  • automated provisioning, OS patching, monitoring dashboard
  • continuous backups and restore
  • deployed in private subnet
  • RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot events
  • enable enhanced monitoring for memory and CPU
  • IAM-based authentication for MySQL and PostgreSQL
    • use an auth token obtained through IAM & RDS API calls (lifetime 15 mins)

RDS Read Replicas

  • Up to 5 Read Replicas
  • within AZ, cross AZ or cross region
  • replication is async so eventually consistent
  • replica can be promoted to main DB
  • the app must update its connection string to use read replicas
  • network cost applies for cross-region replication; replication across AZs within the same region is free

RDS Multi AZ (Disaster Recovery)

  • SYNC replication
  • 1 DNS name, automatic failover (no need to change connection string)
  • high availability, not for scaling

RDS Security – Encryption

  • At rest encryption (possible to encrypt master & read replica with AWS KMS)
    • has to be defined during launch
    • TDE (Transparent Data Encryption) available for Oracle and SQL Server
  • In-flight encryption (SSL)

Amazon Aurora

  • not open source
  • supports PostgreSQL & MySQL drivers
  • AWS cloud optimized
  • storage automatically grows in increments of 10GB, up to 64 TB
  • can have up to 15 read replicas (RDS MySQL only has 5)
  • fast failover (native HA)
  • 1 instance takes writes (master)
  • able to authenticate using IAM token

Aurora DB Cluster : the writer endpoint points to the master, the reader endpoint provides connection load balancing across read replicas (with auto scaling)

Aurora – Custom Endpoint : can define subset of Aurora as custom endpoint to run ex : analytical queries

Aurora Serverless : automatic db instantiation and auto scaling based on actual usage

Aurora Multi-Master : immediate failover for High Availability (HA). every node does Read and Write instead of promoting Read Replica as new master

Global Aurora

  • Aurora Cross Region Read Replica (for disaster recovery)
  • Aurora Global Database
    • 1 primary region (R/W)
    • up to 5 secondary (read only) region
    • promote another region as master RTO < 1 minute

Aurora Machine Learning : enables adding ML-based predictions to applications

  • supported : Amazon SageMaker, Amazon Comprehend

Amazon ElastiCache : managed Redis or Memcached

  • Redis : multi AZ with auto-failover, read replicas, data durability using AOF persistence, backup & restore
  • Memcached : multi-node for partitioning of data (sharding), no high availability, non persistent, no backup restore

ElastiCache – Redis : has Redis AUTH; sorted sets guarantee uniqueness & element ordering, ex: gaming leaderboards

Amazon Route 53 : authoritative DNS (the customer can update DNS records). Also a domain registrar, and can check resource health

High TTL : records may be outdated, but there is less traffic to Route 53 (lower bill)

CNAME vs Alias : AWS Resources expose an AWS hostname

  • CNAME : point a hostname to any other hostname
    • Only for non root domain (ex : app.mydomain.com –> something.anything.com)
  • Alias : point to hostname of AWS Resource
    • for root and non root (ex : mydomain.com –> something.amazonaws.com)
    • free

Route 53 Routing Policy

  • Simple Routing : can return multiple values (the client randomly picks one), no health checks
  • Weighted : control the % of requests that go to each resource
    • can be associated with health checks
  • Latency : redirect to the resource with the lowest latency for the user, based on traffic between users and AWS regions
    • can be associated with health checks
  • Failover : active – passive
  • Geolocation : based on user location; specify by continent, country or state
  • Geoproximity : based on user and resource location; also uses a bias to shift the decision taken by Route 53
  • Multivalue : routes to multiple values/resources; same as Simple but supports health checks

Amazon S3 – Buckets

  • globally unique name
  • region level
  • multi part upload for file > 100MB
  • by default all bucket private

S3 Versioning

  • enabled at the bucket level; once enabled, versioning can only be suspended, not disabled

S3 Encryption : 4 methods

  • SSE-S3 : encrypt S3 object using key handled & managed by AWS
  • SSE-KMS : use AWS Key Management Service to manage encryption keys
  • SSE-C : manage own encryption key
  • Client Side Encryption

S3 Security & Bucket Policy

  • User based : IAM policies. can access if allowed
  • Resource based
    • Object Access Control List (ACL)
    • Bucket Access Control List (ACL)
  • Explicit deny takes precedence

MFA Delete : forces users to provide Multi-Factor Authentication before important operations in S3 (ex: permanently delete an object version, suspend versioning)

S3 Force Encryption : use a bucket policy to refuse PUT API calls without an encryption header, or use default encryption

S3 Replication : Cross Region Replication and Same Region Replication

  • must enable versioning in source and destination
  • buckets can be in different accounts
  • replication is asynchronous
  • no replication chaining: if bucket 1 replicates to bucket 2, which replicates to bucket 3, objects in bucket 1 are not replicated to bucket 3

S3 pre-signed URL : valid for 3600 seconds by default. A user given a pre-signed URL inherits the permissions of the user who generated it (see the sketch below)
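A minimal boto3 sketch for generating a pre-signed URL; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

# URL valid for 1 hour (3600 seconds, the default expiry mentioned above)
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "premium/report.pdf"},
    ExpiresIn=3600,
)
print(url)  # anyone with this URL can GET the object until it expires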

S3 Storage Classes

  • S3 Standard – General Purpose : Most costly alternative with 99.99% availability (probability that the object will be available when needed) and 99.999999999% durability (probability that data will not be lost)
  • S3 Standard – Infrequent Access (IA) : still replicates data to different AZs in a region but is cheaper than Standard. Retrieval fees are charged
  • One-Zone – Infrequent Access : Data stored in single AZ
  • S3 Intelligent Tiering : Storage tiering is managed by AWS instead which uses Machine learning to decide on which tier to use for the bucket based on historical patterns of usage
  • Glacier : Used when cost has to be less but time of retrieval can be configurable ranging from minutes to hours
  • Glacier Deep Archive : This is cheapest option where retrieval time of more than 12 hours is acceptable

S3 Lifecycle Rules : Can use S3 Analytics to determine when to transition Standard to Standard IA

  • Lifecycle rules can be used to automatically expire objects based on prefix/tag filters (e.g. all objects having tag “abc” should expire after 30 days)
  • Objects can automatically transition across different storage tiers based on lifecycle rules, e.g. after 30 days migrate objects to One Zone-IA, after 60 days move them to Glacier, and finally expire them after 120 days

S3 Performance

  • Multipart-upload : recommended for files > 100MB, must use for > 5GB
  • S3 Transfer Acceleration : transfer file to AWS edge location to be forwarded to S3 target region
  • S3 Byte-range fetches : parallelize get by requesting specific byte range
    • for faster download
    • for getting header file only

S3 Select & Glacier Select

  • Retrieve less data using SQL by performing server side filtering

S3 Event Notification : trigger other AWS services on S3 events, ex: generate a thumbnail for every upload

S3 Requester Pays : the requester pays the cost of the request and data transfer (the bucket owner still pays for storage)

AWS Athena

  • serverless query service to perform analytics directly against S3 using SQL
  • charge per query and amount of data scanned

S3 Object Lock : block deletion or overwrite for specific amount of time

S3 Inventory — Reporting for auditing purposes of all S3 objects. Reports can be output in CSV, ORC or Parquet format

AWS Cloudfront : Content Delivery Network (CDN)

  • content cached at edge
  • DDoS protection, integration with AWS Shield, AWS Web Application Firewall
  • Cloudfront Geo restriction : Blacklist & Whitelist
  • support Lambda@Edge that lets run Lambda closer to user

Cloudfront origin : S3 Bucket, Custom Origin (EC2, ALB, etc)

Cloudfront vs S3 Cross Region Replication

  • Cloudfront : Global Edge, File cached at TTL, great for static content
  • S3 Cross Region Replication : must be set up for each region you want to replicate to; great for dynamic content

Cloudfront Signed URL : for distributing premium/paid content (access to a single file or URL, not necessarily S3); Signed Cookies give access to multiple files
Cloudfront Price Classes : All (all regions), 200 (most regions, excluding the most expensive), 100 (least expensive regions only)

Cloudfront Multiple Origin : route to different kind of origin (ex: ALB, S3) (based on path)

Cloudfront Origin Groups : can assign primary origin and secondary origin

Cloudfront Field Level Encryption : asymmetric, sensitive information encrypted at the edge close to user

AWS Global Accelerator :

  • leverages the AWS internal network, proxying packets at the edge to applications inside AWS
  • only 2 external (anycast) IPs need to be whitelisted; built-in health checks

AWS Snow Family : AWS OpsHub to manage snow family device

  • Data migration : Snowcone (8TB), Snowball Edge (80TB), Snowmobile (up to 100PB)
  • Edge computing : Snowcone, Snowball Edge (compute or storage optimized)
    • preprocess data
    • machine learning
    • transcoding media streams

AWS Storage Gateway : bridge between on-premises data and the cloud

  • File Gateway : File access (NFS) with active directory (backed by S3)
  • Volume Gateway : Volume/ Block Storage / iSCSI (backed by EBS)
  • Tape Gateway : VTL Tape solution (backed by S3 and Glacier)
  • No on-prem virtualization? Use the hardware appliance

Amazon FSx is a file system offering from AWS. It is offered in two variants:

  • FSx for Windows
  • FSx for Lustre (High performance compute)

FSx is basically a high performance file system that can be used for compute intensive workloads offering high data throughput. Users can additionally configure the throughput irrespective of the data storage size of the file system (unlike EFS)

FSx is frequently used as file storage for Windows systems as it offers SMB protocol support. Additionally, it also offers integrations with other storage services like S3, where data can be temporarily copied from S3 to AWS FSx for high throughput needs from a filesystem perspective; and later the result can be copied back to S3 after the computations are completed.

Payment model is pay-as-you-go

AWS Transfer Family : managed service for file transfer into and out of S3 and EFS using FTP, FTPS and SFTP

Amazon SQS (Simple Queue Service)

  • Producer : produces messages to SQS using the SDK
    • messages are persisted in SQS until deleted
    • message retention : default 4 days, max 14 days
  • Consumer : polls SQS (receives up to 10 messages at a time), then deletes processed messages (see the sketch below)
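A minimal producer/consumer sketch with boto3, assuming the queue already exists; the queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

# Producer: send a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from producer")

# Consumer: poll up to 10 messages, process them, then delete them
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])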

SQS Message Visibility

  • while a message is being processed, it is invisible to other consumers
  • default visibility timeout is 30 seconds (time allowed for the message to be processed)

SQS Dead Letter Queues

  • if a consumer fails to process a message, it goes back to the queue; after exceeding the maximum receive count it goes to the Dead Letter Queue

SQS Delay Queues : can set delay before consumer can access the queue

SQS Queue Types

  • Standard SQS Queue : the default processing model for the SQS service (at-least-once delivery, best-effort ordering)
  • FIFO SQS Queue : messages are delivered exactly once and arrive in order. Maximum throughput of 300 transactions per second (3,000 with batching)

Amazon SNS (Simple Notification Service)

  • event producers send messages to one topic
  • as many subscribers as needed can listen (ex: SQS, Lambda, email, HTTP endpoints)
  • data is not persisted
  • This is a push-based service, in contrast to SQS which is pull based
  • To ensure updates are not lost, SNS messages are replicated across all AZs
  • It is an immediate notification service with no delays

Streams — Analytics — Firehose

  • When there is a need to consume lots of streaming data on the fly, Kinesis platform can be used: for big-data, analytics and ETL. ordering at shard level

Kinesis offers three different types of services:

  • Kinesis Streams : They work on shards (Shards are containers which define the logical boundaries of data storage). Streams persist the data for minimum 24 hours and maximum 7 days, so that something like Lambda or EC2 can work on this data and understand it.
    • Each shard allows for 1 MB/s incoming data and 2 MB/s outgoing data.
  • Kinesis Firehose : This is without persistence — As soon as data comes in, it has to be read/understood/processed by something like Lambda / EC2 and later the result can be stored in DynamoDB, RDS, S3 or Elastic Cluster etc.
  • Kinesis Analytics : This is used for real time analytics over the data that is pushed to the Kinesis Platform

Amazon MQ (managed apache ActiveMQ) for legacy MQ

Amazon ECS (Elastic Container Service) : launch Docker containers on AWS; however, the EC2 instances must be provisioned by ourselves; permissions are handled with IAM roles

Amazon Fargate : launch Docker containers with no EC2 instances to manage (serverless); each task uses an ENI

Load balancing for ECS : allow any port on the EC2 instance security group from the ALB, because host ports are randomly assigned (dynamic port mapping)

Load balancing for Fargate : each task gets its own ENI and IP; allow the task port on the ENI security group from the ALB security group

ECS Rolling update : when upgrade from v1 to v2, we can control minimum health percent and max percent.

Amazon ECR (Elastic Container Registry) : store, manage and deploy container images on AWS; supports image vulnerability scanning, versioning and lifecycle policies

Amazon EKS (Elastic Kubernetes Service) : managed Kubernetes on the cloud.

Serverless AWS

  • Lambda
  • DynamoDB
  • AWS Cognito
  • AWS API Gateway
  • S3
  • SNS & SQS
  • Kinesis Data Firehose
  • Aurora Serverless
  • Step Function
  • Fargate

AWS Lambda

  • Virtual function, on-demand, short time execution, automated scaling
  • pay per call, pay per duration
  • Execution : memory allocation (128 MB – 10GB), max execution time (15 mins), disk capacity in /tmp (512 MB), environment variables (4KB), max deployment size 250MB (see the handler sketch below)
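A minimal Python Lambda handler sketch; the event shape shown is hypothetical, since real events depend on the trigger (API Gateway, S3, SQS, ...).

import json

def lambda_handler(event, context):
    # event: JSON payload from the trigger
    # context: runtime information (request ID, remaining time, memory, ...)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }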

DynamoDB

  • replication across 3 AZ
  • NoSQL
  • max item size 400KB
  • made of tables, primary key, attributes
  • must have provisioned read and write capacity unit
  • Read Capacity Unit (RCU) and Write Capacity Unit (WCU)
  • can set auto scaling and has burst credits; if burst credits are empty you get a ProvisionedThroughputExceededException
  • has Transaction (all or nothing)
  • has also on-demand (no provisioning)
  • support Global Tables (cross region replication)
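A small boto3 sketch of basic item operations; the table name and attributes are hypothetical and assume a table with partition key "username" already exists.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # hypothetical table with partition key "username"

# Write an item (consumes WCU)
table.put_item(Item={"username": "daniel", "email": "daniel@example.com"})

# Read it back by primary key (consumes RCU)
resp = table.get_item(Key={"username": "daniel"})
print(resp.get("Item"))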

DynamoDB supports DAX, to cache responses and improve time from milliseconds to microseconds

DynamoDB Streams : changes in DynamoDB (Create, update, delete) can end up in DynamoDB Streams, implement cross region replication using DynamoDB Streams

AWS API Gateway

  • Support WebSocket Protocol
  • Handle API versioning, handle different environments
  • Handle security (Authentication & Authorization)
  • Swagger / Open API import
  • API key, request throttling
  • Generate SDK and API spec
  • Transform & validate request & response
  • Leverages “Sig V4” capability where IAM credential are in headers
  • Use IAM policy authorization and attach to User / Role
  • if static data, use CloudFront

API Gateway – Endpoint Types

  • Edge-Optimized (default) : for global clients, API request routed to nearest CloudFront edge location
  • Regional : client with same region
  • private : from own VPC

AWS Cognito : give user identity to interact with application

  • Cognito User Pools : sign in functionality for app users, integrate with API gateway
    • simple login : username/password, a serverless database of users
    • can enable federated identities (Facebook, Google, SAML)
    • send back JWT
  • Cognito Identity Pools (Federated Identity) : provide AWS credentials to user so they can access AWS resources directly, integrate with Cognito User Pools
    • provide direct access to AWS resource from Client Side
    • get temporary AWS credential from Federated Identity Pool
    • credentials come with predefined IAM policy
  • Cognito Sync : synchronize data from device to Cognito, may be deprecated and replaced by AppSync

AWS SAM : Serverless Application Model

  • Framework for developing & deploying serverless applications
  • All configuration is YAML code

Database Types

  • RDBMS (SQL, OLTP): RDS, Aurora (great for joins)
  • NoSQL : Dynamo DB, ElastiCache, Neptune (no joins, no SQL)
  • Data Warehouse (Analytics, BI): Redshift (OLAP), Athena
  • Search : ElasticSearch
  • Graphs : Neptune (display relationship between data)

Redshift

  • based on PostgreSQL but not for OLTP
  • used for OLAP online analytical processing (analytics & data warehousing)
  • BI tools such as AWS Quicksight or Tableau integrate with it
  • data loaded from S3, DynamoDB, DMS, other DB
  • Redshift Spectrum : query directly against S3
  • Redshift : Analytics / BI / Data Warehouse

AWS Glue : managed ETL (Extract, Transform, Load) service, prepare and transform data for analytics

Neptune : fully managed graph database, high relationship data, social networking, knowledge graphs (wikipedia)

ElasticSearch : search by any field

AWS CloudWatch Metrics

  • provides metrics for every service in AWS
  • dimension is an attribute of a metric
  • up to 10 dimensions per metric
  • custom metrics : ex RAM usage, disk space, no of logged in users
  • standard monitoring : 5 min, detailed monitoring : 1 min, high resolution custom metrics : down to 1 sec (see the sketch below)
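Publishing a custom metric (such as the number of logged-in users mentioned above) can be sketched with boto3; the namespace, metric name and dimension are made up for illustration.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp",                      # hypothetical custom namespace
    MetricData=[{
        "MetricName": "LoggedInUsers",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
        "Value": 42,
        "Unit": "Count",
    }],
)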

AWS CloudWatch Logs : applications can send logs to CloudWatch using SDK, can use filter expressions

AWS CloudWatch Logs Agent : by default, no logs from EC2 instances go to CloudWatch; a CloudWatch agent must be installed

  • CloudWatch Logs Agent : old version, can only send to CloudWatch Logs
  • CloudWatch Unified Agent : collect additional system-level metrics such as : RAM, processes, etc

CloudWatch Alarms : trigger notification for any metric, states : OK, INSUFFICIENT_DATA, ALARM

  • Stop, Terminate, Reboot or Recover an EC2 Instance, recover with exact same IP, instance info, etc
  • Trigger Auto Scaling Action
  • Send notif to SNS

CloudWatch Events : Intercept events from AWS services (ex : EC2 instance start, S3, CodeBuild)

  • can create schedule or Cron : ex: create an event every 4 hours
  • JSON payload created from the event and passed to a target
  • Standard CloudWatch every 5 minute, Detailed every 1 minute

Amazon EventBridge : next evolution of CloudWatch Events

  • Default event bus : generated by AWS services
  • Partner event bus : receive events from SaaS service or applications
  • Custom event bus : for own application

Amazon EventBridge Schema Registry : can analyze the events in bus and infer the schema, allow to generate code that will know in advance how data is structured in the event bus

AWS CloudTrail : provides governance, compliance and audit for your AWS account

  • CloudTrail is enabled by default
  • Management Event : operation performed on resources in AWS account
  • Data Event : by default data event is not logged

CloudTrail Insights : enable CloudTrail Insights to detect unusual activity in your account; analyzes normal management events to create a baseline

AWS Config : help with auditing and recording compliance of AWS resources. helps record configuration and changes over time

view compliance of a resource over time (ex: all security group should block port 80. so view comply/non-compliant item)

AWS Config Remediations : remediate non-compliant resources; AWS Config Notifications : get notified whenever a resource configuration changes

CloudWatch vs CloudTrail vs Config

  • CloudWatch : performance monitoring & dashboard
    • event & alert
    • log aggregation & analysis
  • CloudTrail : record API call made within account by everyone
    • can define trails for specific resources
    • global service
  • Config : record configuration changes
    • evaluate resources against compliance rules
    • get timeline of changes and compliance

Amazon STS (Security Token Service) : allow to grant limited and temporary access to AWS resources

  • token valid up to 1 hour
  • AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity (use Cognito instead)

Use STS to Assume a Role

  • Define IAM Role within account or cross account
  • Define which principal can access the IAM role
  • Use STS to retrieve credentials and impersonate the IAM Role
  • valid 15 min – 1 hour
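A boto3 sketch of assuming a role and using the temporary credentials; the role ARN and session name are placeholders.

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # hypothetical role
    RoleSessionName="demo-session",
    DurationSeconds=900,          # 15 minutes, the minimum session duration
)
creds = resp["Credentials"]       # temporary AccessKeyId / SecretAccessKey / SessionToken

# Use the temporary credentials to impersonate the role
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])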

Identity Federation in AWS : let user outside AWS assume temporary role for accessing AWS resources

  • Use SAML, Amazon Cognito, SSO, AWS Microsoft AD
  • no need to create IAM users

AWS Cognito : login to federated id provider, get temporary AWS credential from Federated Identity Pool, credential come with pre-defined IAM policy

AWS Directory Services : AWS Managed AD

  • AWS Managed Microsoft AD : establish a trust relationship between on-prem AD and AWS Managed AD
  • AD Connector : a directory gateway (proxy) that forwards requests to the on-prem AD
  • Simple AD : AD-compatible managed directory on AWS. cannot be joined with on-prem AD

AWS Organization : global service, manage multiple AWS account. main account is master

  • consolidated billing. member account can only be part of 1 organization
  • pricing benefit

Multi Account Strategies : create accounts per department, per cost center, per dev/test/prod based on regulatory restriction for better resource isolation

Service Control Policies (SCP)

  • whitelist / blacklist IAM actions
  • applied at OU or account level
  • not apply to master account
  • SCPs must have an explicit allow (they do not allow anything by default)
  • Hierarchy : Master –> OU –> Account level

When you assume a role, you give up your original permissions and take the permissions assigned to the role

When using a resource based policy, principal doesn’t have to give up permissions

Deny evaluation –> organization scp –> resource based policy –> IAM permission boundary –> session policies –> identity based policies

AWS Resource Access Manager (RAM)

  • Share AWS resource that you own with other AWS accounts
  • share with any account or within organization
  • License Manager Configurations
  • VPC subnets (allow to have all resources in the same subnet)

AWS SSO (Single Sign-On)

  • Centrally managed Single Sign-On to access multiple accounts and third party business application
  • Integrated with AWS organization
  • SAML markup
  • integration with on-prem Active Directory
  • centralized permission and auditing

Amazon KMS (Key Management Service) : AWS manages the encryption keys for us; fully integrated with IAM for authorization

  • symmetric & asymmetric
  • Customer Master Key (CMK) (AWS default managed CMK, user key created in KMS, user key imported)

KMS Automatic Key Rotation : for customer managed CMK. if enabled, happen every 1 year.

KMS Manual Key Rotation : rotate every 90 / 180 days, etc.; the solution for CMKs not eligible for automatic rotation (ex: asymmetric CMKs)

KMS Key Policy : control access to KMS CMK

Use a Key Alias to hide key changes/rotation from the application; the app only knows the alias (see the sketch below).
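A boto3 sketch of encrypting and decrypting with a CMK referenced through an alias; the alias name is hypothetical.

import boto3

kms = boto3.client("kms")

# Encrypt using the alias, so the app never needs to know the underlying key ID
ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",         # hypothetical alias
    Plaintext=b"super secret value",
)["CiphertextBlob"]

# Decrypt; KMS resolves the key that produced the ciphertext
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)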

SSM Parameter Store

  • secure storage for configuration and secret
  • serverless, scalable, durable
  • version tracking available
  • able to assign TTL to a parameter
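Reading a SecureString parameter can be sketched with boto3; the parameter name is hypothetical.

import boto3

ssm = boto3.client("ssm")

resp = ssm.get_parameter(
    Name="/myapp/prod/db-password",   # hypothetical parameter name
    WithDecryption=True,              # let KMS decrypt the SecureString value
)
db_password = resp["Parameter"]["Value"]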

AWS Secrets Manager

  • storing secrets
  • capable of force rotation of secret every x days
  • integration with Amazon RDS, encrypted using KMS
  • mostly meant for RDS

CloudHSM : AWS provisions encryption hardware. user manage own encryption keys entirely, must use CloudHSM Client Software

AWS Shield :

  • AWS Shield Standard : free for every AWS customer
  • AWS Shield Advanced : optional DDoS mitigation service , protect against higher fees during usage spikes due to DDoS

AWS WAF (Web Application Firewall)

  • Protects web applications from common web exploits (layer 7, HTTP)
  • Deploy on ALB, API Gateway, CloudFront
  • Define Web ACL (Web Access Control List)
    • Rules : IP address, HTTP headers, URI
    • protect from SQL injection, XSS

AWS Firewall Manager : manage rules in all accounts of AWS organization, common set of security rules, WAF rules, AWS shield advanced

Amazon GuardDuty : intelligent threat discovery (using machine learning)

  • input data includes (cloudtrail logs, VPC flow logs, DNS logs)
  • can protect against cryptocurrency attacks

AWS Inspector : only for EC2 instances; network assessments and host assessments (common vulnerabilities & exposures) to check whether EC2 instances have vulnerabilities.

Amazon Macie : fully managed data security and privacy service that uses machine learning & pattern matching to discover and protect sensitive data in AWS

AWS Shared Responsibility Model

  • AWS responsibility (security of the cloud) : infrastructure
  • Customer responsibility (security in the cloud) : EC2 instance patches, encryption

CIDR subnet mask

  • /32 = 2^0 = 1
  • /31 = 2^1 = 2
  • /24 : the last IP number can change (ex 192.168.0.0/24 –> 192.168.0.0 – 192.168.0.255)
  • /16 : the last 2 numbers can change
  • /8 : the last 3 numbers can change
  • /0 : all numbers can change (see the sketch below)
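The subnet-size arithmetic above can be checked with Python's standard ipaddress module:

import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.num_addresses)          # 256 addresses: 192.168.0.0 - 192.168.0.255
print(net.netmask)                # 255.255.255.0

print(ipaddress.ip_network("10.0.0.0/28").num_addresses)   # 16 (2^(32-28))
print(ipaddress.ip_network("172.16.0.0/12").num_addresses) # 1048576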

Private IP range

  • 10.0.0.0 – 10.255.255.255 (10.0.0.0/8)
  • 172.16.0.0 – 172.31.255.255 (172.16.0.0/12)
  • 192.168.0.0 – 192.168.255.255 (192.168.0.0/16)

Default VPC

  • all new accounts have a default VPC
  • new instance launched into default VPC if no subnet specified
  • default VPC have internet and all instances have public IP

VPC in AWS – IPv4

  • 5 VPC max per region (soft limit)
  • Max 5 CIDRs per VPC; each CIDR min size /28, max size /16
  • VPC is private, only private IP ranges are allowed
  • IP should not overlap with other networks (ex: corporate)

Subnets – IPv4

  • AWS reserves 5 IP (first 4 and last 1) in each subnet.

Internet Gateway

  • 1 VPC attached to 1 IGW
  • created separately from VPC

NAT Instance (Network Address Translation)

  • allows instance in the private subnet to connect to internet
  • must be launched in the public subnet
  • must have elastic IP attached
  • route table must be configured to route traffic from private subnet to NAT instance

NAT Gateway

  • AWS managed NAT, higher bandwidth, better availability no admin
  • requires IGW (Private Subnet –> NAT –> IGW)
  • no security group
  • created in specific AZ, uses Elastic IP
  • resilient within a single AZ; create NAT gateways in multiple AZs for fault tolerance

DNS Resolution in VPC

  • enableDnsSupport : if true, query AWS DNS server
  • enableDnsHostname : true by default for the default VPC; if true, assigns a public hostname to an EC2 instance that has a public IPv4 address

NACL & Security Group : Network Access Control List

  • NACL : firewall which control traffic from and to subnet
  • 1 NACL per subnet
  • Default NACL allow all inbound and outbound
  • ephemeral ports : 1024 – 65535
  • security group : instance level, stateful (return traffic automatically allowed); NACL : subnet level, stateless (return traffic must be explicitly allowed)

VPC Peering

  • Connect 2 VPC privately using AWS network, behave as if same network
  • must not have overlapping CIDR
  • not transitive
  • Route table must be updated in both VPC that are peered

VPC Endpoints

  • allow to connect to AWS service using private network instead of public
  • remove the need of IGW, NAT, etc to access AWS services
  • Interface endpoint –> provisions an ENI (private IP address) as an entry point; works with most AWS services
  • Gateway –> provision a target and must be used in a route table – S3 and DynamoDB

VPC Flow logs : capture info about IP traffic going into interface, monitor & troubleshoot connectivity issues, capture network info from AWS managed interfaces too. can be queried using Athena

Bastion Hosts

  • use a bastion host to SSH into private instances
  • public subnet which is then connected to other private subnets
  • bastion host security group must be tightened
  • make sure bastion host only has port 22 traffic from the IP needed, not from security groups of other instances

Site to Site VPN

  • Virtual Private Gateway : VPN concentrator on the AWS side of VPN connection
    • VGW is created and attached to the VPC from which you want to create site-to-site VPN connection
  • Customer Gateway : software app/physical device on customer side of the VPN connection

Direct Connect (DX) : provides a dedicated private connection from a remote network to VPC

  • Dedicated Direct support 100 Gbps & 10 Gbps
  • Hosted connection : 50 Mbps, 500 Mbps, 10 Gbps

Egress only Internet Gateway : for IPv6 only, similar function as NAT.

  • IPv6 all public addresses
  • give IPv6 instance access to internet but not directly reachable by internet

AWS Private Link (VPC Endpoint Services)

  • most secure & scalable to expose a service to 1000s of VPC
  • requires NLB (Service VPC) and ENI (Customer VPC)
  • doesn’t require exposing the app to the internet or peering VPCs, yet the service can still be consumed by other customers’ VPCs

AWS VPN CloudHub

  • Provide secure communication between sites if you have multiple VPN connections
  • Low cost hub-and-spoke model
  • allow to securely communicate with multiple sites using AWS VPN

Transit Gateway : Site to site VPN ECMP (Equal-cost multi-path routing)

  • routing strategy to allow to forward a packet over multiple best path
  • use case : create multiple site-to-site VPN connection to increase bandwidth of connection to AWS

IPv6 in VPC

  • IPv4 cannot be disabled for VPC and subnet
  • get private internal IPv4 and public IPv6

Disaster Recovery

  • RPO : Recovery Point Objective (how much data can be lost / how far back you can recover)
  • RTO : Recovery Time Objective (downtime)

Disaster Recovery Strategies

  • Backup and Restore (High RPO)
  • Pilot Light : small version of app always running on cloud, useful for critical core
  • warm standby : full system up and running but minimum size, scale to production upon disaster
  • multi site / hot site approach : full production scale on AWS and on prem

DMS (Database Migration Service)

  • must create EC2 instance to perform replication
  • source DB remain available during migration
  • quickly and securely migrate database to AWS
  • CDC (Continuous Data Replication)

AWS Schema Conversion Tool (SCT) : convert one engine to another

On Premise strategy with AWS

  • able to download amazon linux 2 AMI as VM
  • VM import / export
  • AWS Application Discovery Service : gather info on on-prem server to plan migration
    • track with AWS migration hub
  • AWS Server Migration Service (SMS) : incremental replication of on-prem live server to AWS

AWS DataSync : move large amount of data from on-prem to AWS S3, EFS, Amazon FSx

AWS Backup : centrally manage and automate backup across AWS services, support cross-account backup

Elastic Fabric Adapter (EFA) : network interface for EC2 instances that enable customer to run applications requiring high level of inter-node communications.

CloudFormation : outline AWS infrastructure (Infrastructure as code)

  • destroy and recreate infrastructure on the cloud on the fly
  • automated generation of diagram
  • declarative programming

CloudFormation – StackSets : create, update delete stacks across multiple accounts and regions with a single operation

AWS Step Functions : serverless visual workflow to orchestrate Lambda functions

AWS SWF (Simple Workflow Service)

  • coordinates work amongst applications
  • runs on EC2
  • supports human intervention steps

Amazon EMR (Elastic MapReduce) : helps creating hadoop clusters (big data) to analyze and process vast amount of data

AWS CodeCommit (GitHub), CodeBuild (Jenkins), AWS CodeDeploy, AWS CodePipeline

AWS Opsworks : chef/puppet –> help perform server configuration

AWS Workspace : Managed, Secure Cloud Desktop, eliminate management of VDI (Virtual Desktop Infrastructure)

AppSync : store and sync data across mobile and web apps

Git Command Cheat Sheet

List all git configuration, to quit press q

> git config -l

Change name and password globally

> git config --global user.name "Daniel Leonardo Niko"
> git config --global user.email "my@email.com"

Make new repository in GitHub (case of existing local project, ex: new project from Spring Initializr)

  • Click “+” icon and select New repository
  • Input repository name, description, Public / Private
  • skip README, gitignore and license
  • Click “Create repository”
  • Go to command prompt and execute the following
> cd existing_project
> git init
> git add .
> git commit -m "First commit"
> git remote add origin https://github.com/your_username/repository_name.git
> git push origin master

Clone repository to local directory

> git clone https://github.com/your_username/repository_name.git

Clone specific branch

> git clone -b <branchname> https://github.com/your_username/repository_name.git

Checkout / switch to remote branch. For example, remote branch created by another developer and we want to pull that branch.

# fetch all remote branches
> git fetch origin
# list all branches available for checkout
> git branch -a
# pull changes from a remote branch
> git checkout -b <branch_name> origin/<branch_name>
# verify branches has been changed
> git branch

Save username and password of Git

# run
> git config --global credential.helper store
# provide username and password below
> git pull

Hands on Beginner Apache Kafka

In this tutorial, we will go hands-on with the basics of Apache Kafka, using a producer and a consumer from the command prompt, while also learning the concepts.

For this tutorial, we will need the following tools :

  1. Apache Kafka
  2. Java 1.8

After downloading Apache Kafka, unzip it, rename the folder to “kafka” and put it in the D:\ directory for easy access later on. Open the folder and look for the bin folder. This folder contains all the Kafka commands that we need, so we should add the bin folder to the Windows PATH environment variable for easier execution in the command prompt. For a Windows environment, use bin\windows.

Next, look for the config directory and open zookeeper.properties. In this file there is a configuration that we need to modify: the location of the data and log folder. Go up to the kafka directory, make a new folder called data and, inside it, create another folder named zookeeper. Modify zookeeper.properties to set the data directory.

dataDir=D:/kafka/data/zookeeper

Similarly, inside data folder, create kafka folder to store kafka logs and this time, modify server.properties

log.dirs=D:/kafka/data/kafka

Kafka comes with Zookeeper and it uses Zookeeper to manage Kafka clusters. For more information on relationship between Zookeeper and Kafka, this blog has a good explanation. For now, keep in mind that Kafka cannot function without Zookeeper even if we only have one Kafka server.

Now, let's open a command prompt, change directory to D:\kafka and start Zookeeper first with the zookeeper-server-start.bat command, using the zookeeper.properties we modified earlier. Zookeeper runs on port 2181, so make sure no other application is running on that port.

zookeeper-server-start.bat config\zookeeper.properties

Next, we run Kafka. Open a new command prompt window and start Kafka. Kafka runs in port 9092.

kafka-server-start.bat config\server.properties

Make sure the two command prompt windows are alive during the tutorial.


Now, we are going to test producing message to Kafka and consuming message from Kafka.

First, we need to create topic. We will use “comments” as topic.

kafka-topics --bootstrap-server localhost:9092 --create --topic comments --partitions 3 --replication-factor 1

Using the above command, we create, inside our Kafka server, a topic named comments with 3 partitions, replicated to only 1 broker/server.

Because we have more than one partition, streams of data can be written to our Kafka broker in parallel, in our case to 3 partitions. So, instead of waiting for one partition to finish saving our data, we can write to another partition in parallel. Technically, more partitions mean data can be written faster in parallel. However, there are also drawbacks to having many partitions, which will not be covered in this tutorial.

Also, we set replication-factor to 1, meaning each piece of data is saved/replicated to 1 broker/server. Since we only have 1 running Kafka broker/server, we can only set replication-factor to 1. (Broker = server; they are exactly the same thing. However, in Kafka terminology we usually say broker, so I will use broker.)

Open a new command prompt window and run the console consumer. We can use the Java Consumer API or another application to consume Kafka data, but in this case we use the console consumer first.

kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic comments

Nothing happens because we have not produced anything yet. So, open a new command prompt window and produce the strings “hello”, “kafka” and “beginner” to the comments topic.

kafka-console-producer --bootstrap-server localhost:9092 --topic comments
> hello
> kafka
> beginner

Each line entered after the command represents a message sent to the topic. So, if we look at the console-consumer window, we will see those 3 lines printed in the console. Our console producer successfully produced the data, which was then consumed by the console consumer. If you prefer code over the console tools, see the sketch below.
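A rough equivalent using the third-party kafka-python library (an assumption; the tutorial itself only uses the bundled console scripts) could look like this:

from kafka import KafkaProducer, KafkaConsumer

# Producer: send three messages to the "comments" topic
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for text in [b"hello", b"kafka", b"beginner"]:
    producer.send("comments", text)
producer.flush()

# Consumer: read everything currently in the topic, stop after 5s of silence
consumer = KafkaConsumer(
    "comments",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value.decode())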

Now that we see Kafka in action, we can learn in-depth of what we just did in next posts.

Simple CRUD with JDBC, Gradle and JUnit Testing

You can view and download the complete source code of this tutorial from my github account.

In this tutorial, we will create a simple CRUD User Management Console Application using Java, MySQL, JDBC. We will generate and build the project using Gradle and perform unit testing using Junit 5.

For this tutorial, we will need the following tools :

1. Eclipse (i use Spring Tool Suite v4.x)

2. XAMPP or MySQL server

3. Gradle (i use version 5.6.2)


First, lets make a new project called “simpleGradleJunit”.

$> mkdir simpleGradleJunit

$> cd simpleGradleJunit

$> gradle init

Starting a Gradle Daemon (subsequent builds will be faster)

Select type of project to generate:2
   1: basic
   2: application
   3: library
   4: Gradle plugin
 Select implementation language:3
   1: C++
   2: Groovy---> 0% EXECUTING [3s]
   3: Java
   4: Kotlin
   5: Swift
 Select build script DSL:1
   1: Groovy
   2: Kotlin---> 50% EXECUTING [1m 20s]
 Select test framework:4
   1: JUnit 4
   2: TestNG---> 50% EXECUTING [1m 27s]
   3: Spock
   4: JUnit Jupiter
 Project name (default: simpleGradleJunit):
 Source package (default: simpleGradleJunit): id.danielniko.simpleGradleJunit

> Task :init
 Get more help with your project: https://docs.gradle.org/5.6.2/userguide/tutorial_java_projects.html

BUILD SUCCESSFUL in 2m 28s
 2 actionable tasks: 2 executed

Import the resulting directory into your Eclipse workspace using Import –> Existing Gradle Project.

The generated project follows the standard Gradle Java project layout.

Let's make sure the Gradle-generated classes, App and AppTest, work well. Right click on the App.java file and select Run As –> Java Application. You will get “Hello World.” in the console window.

Now, run AppTest.java. Right click and select Run As –> JUnit Test. You will encounter the following error :

java.lang.NoClassDefFoundError: org/junit/platform/commons/PreconditionViolationException

If that is the case, edit build.gradle to the following

apply plugin: 'application'

mainClassName = 'id.danielniko.simpleGradleJunit.App'

repositories {
    jcenter()
}

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.6.0'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.6.0'
}

test {
    // Use junit platform for unit tests
    useJUnitPlatform()
}

Now run the AppTest again as JUnit Test. It will show green bar and no Failures.

This means our setup is complete. Now it's time to finally make our application.


Create our database “budget” and table “users” with the following SQL. You can also insert a test row to check that the program can connect successfully.

drop database budget;
create database budget;
use budget;

CREATE TABLE `users` (
  `username` varchar(30) NOT NULL,
  `password` varchar(40) NOT NULL,
  `full_name` varchar(45) NOT NULL,
  `email` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`username`)
) 

Next, we will add the MySQL connector to build.gradle. Don't forget to right click your project and go to Gradle –> Refresh Gradle Project whenever you edit build.gradle. This triggers Gradle to download the dependencies.

dependencies {
    implementation 'mysql:mysql-connector-java:8.0.21'
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.6.0'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.6.0'
}

Create a new class in id.danielniko.util package and name it DbUtil.java. This class handles the database connection to our MySQL server. In this class, we read a .properties file which contains the information necessary for the connection.

package id.danielniko.util;

import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class DbUtil {
	
	private static Connection connection = null;
    private static Properties prop = null;

	public static String getProperty(String key) {
		if (prop == null) {
			// lazily load database.properties once and cache it in the static field
			Properties props = new Properties();
			InputStream inputStream = DbUtil.class.getClassLoader().getResourceAsStream("database.properties");
			try {
				props.load(inputStream);
				prop = props;
			} catch (IOException e) {
				e.printStackTrace();
				return null;
			}
		}
		return prop.getProperty(key);
	}
	
    public static Connection getConnection() {
        if (connection != null)
            return connection;
        else {
            try {
                String driver = getProperty("driver");
                String url = getProperty("url");
                String user = getProperty("user");
                String password = getProperty("password");
                Class.forName(driver);
                connection = DriverManager.getConnection(url, user, password);
                // auto-commit is left at its default (true) so normal updates are persisted;
                // the unit tests disable it explicitly to discard their changes
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (SQLException e) {
                e.printStackTrace();
            } 
            return connection;
        }

    }

}

Create the properties file directly under the src/main/resources folder. Create a new file, name it database.properties. Put the following information inside.

driver=com.mysql.cj.jdbc.Driver
url=jdbc:mysql://localhost:3306/budget
user=root
password=your_password

Now, lets test the .properties file read method and database connection class.

Create new class “DbUtilTest.java” under src/test/java with the same package as class to be tested, id.danielniko.util. In this unit test, we check whether .properties file can be read properly and database connection has been properly configured.

package id.danielniko.util;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.sql.Connection;

import org.junit.jupiter.api.Test;

class DbUtilTest {
	
	@Test 
    void getPropertyTest() {
    	String driver = DbUtil.getProperty("driver");
    	assertEquals("com.mysql.cj.jdbc.Driver", driver, "Db driver should match");
    }

    @Test 
    void getConnectionTest() {
    	Connection dbConnection = DbUtil.getConnection();
        assertNotNull(dbConnection, "connection should be successfull.");

    }
}

Right click on the class and Run as JUnit test. Make sure that the test runs successfully.


Now we can concentrate on the business process itself.

Create new POJO (Plain Old Java Object) class named User.java under id.danielniko.model

package id.danielniko.model;

public class User {

	private String username;
	private String password;
	private String fullName;
	private String email;
	
	public String getUsername() {
		return username;
	}
	public void setUsername(String username) {
		this.username = username;
	}
	public String getPassword() {
		return password;
	}
	public void setPassword(String password) {
		this.password = password;
	}
	public String getFullName() {
		return fullName;
	}
	public void setFullName(String fullName) {
		this.fullName = fullName;
	}
	public String getEmail() {
		return email;
	}
	public void setEmail(String email) {
		this.email = email;
	}
	@Override
	public String toString() {
		return "User [username=" + username + ", password=" + password + ", fullName=" + fullName + ", email=" + email
				+ "]";
	}
	
}

Next, we can create DAO (Data Access Object) and name it UserDao.java inside package id.danielniko.dao

package id.danielniko.dao;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import id.danielniko.model.User;
import id.danielniko.util.DbUtil;

public class UserDao {

    private Connection connection;

    public UserDao() {
        connection = DbUtil.getConnection();
    }
    
    public UserDao(Connection conn) {
        connection = conn;
    }

    public void addUser(User user) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("insert into users(username,password,full_name,email) values (?, ?, ?, ? )");
            // Parameters start with 1
            preparedStatement.setString(1, user.getUsername());
            preparedStatement.setString(2, user.getPassword());
            preparedStatement.setString(3, user.getFullName());
            preparedStatement.setString(4, user.getEmail());
            preparedStatement.executeUpdate();
            preparedStatement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        } 
    }

    public void deleteUser(String username) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("delete from users where username=?");
            // Parameters start with 1
            preparedStatement.setString(1, username);
            preparedStatement.executeUpdate();
            preparedStatement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public void updateUser(User user) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("update users set password=?, full_name=?, email=?" +
                            "where username=?");
            // Parameters start with 1
            preparedStatement.setString(1, user.getPassword());
            preparedStatement.setString(2, user.getFullName());
            preparedStatement.setString(3, user.getEmail());
            preparedStatement.setString(4, user.getUsername());
            preparedStatement.executeUpdate();
            preparedStatement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public List<User> getAllUsers() {
        List<User> users = new ArrayList<User>();
        try {
            Statement statement = connection.createStatement();
            ResultSet rs = statement.executeQuery("select * from users");
            while (rs.next()) {
                User user = new User();
                user.setUsername(rs.getString("username"));
                user.setPassword(rs.getString("password"));
                user.setFullName(rs.getString("full_name"));
                user.setEmail(rs.getString("email"));
                users.add(user);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }

        return users;
    }

    public User getUserByUsername(String username) {
        User user = new User();
        try {
            PreparedStatement preparedStatement = connection.
                    prepareStatement("select * from users where username=?");
            preparedStatement.setString(1, username);
            ResultSet rs = preparedStatement.executeQuery();

            if (rs.next()) {
                user.setUsername(rs.getString("username"));
                user.setPassword(rs.getString("password"));
                user.setFullName(rs.getString("full_name"));
                user.setEmail(rs.getString("email"));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }

        return user;
    }
}

As with the previous classes, let's make a unit test for this DAO class. We create UserDaoTest.java inside the id.danielniko.dao package in src/test/java. In this class, we use @BeforeAll to disable auto-commit before testing, because by default every executeUpdate() is committed to the DB, and we don't want that to happen during our tests.

In this case, the DB operation in test scenario will run in the actual DB but not committed so any changes will be discarded after the test finishes.

package id.danielniko.dao;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import java.sql.Connection;
import java.sql.SQLException;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import id.danielniko.model.User;
import id.danielniko.util.DbUtil;

class UserDaoTest {
	
	private static UserDao dao;
	
	@BeforeAll
	static void init() {
		Connection conn = DbUtil.getConnection();
		try {
			// set auto commit false so any operation in this test will be discarded.
			conn.setAutoCommit(false);
		} catch (SQLException e) {
			e.printStackTrace();
		}
		dao = new UserDao(conn);
	}
	
	@AfterAll
	static void teardown() {
		Connection conn = DbUtil.getConnection();
		try {
			// Discard everything the tests did before restoring auto commit;
			// switching auto commit back on would otherwise commit the pending changes.
			conn.rollback();
			conn.setAutoCommit(true);
		} catch (SQLException e) {
			e.printStackTrace();
		}
	}
	
	@Test
	void addUserTest() {
		User user = new User();
		user.setUsername("steven");
		user.setPassword("mypass");
		user.setFullName("Steven Gerrard");
		user.setEmail("steven@example.com");
		dao.addUser(user);
		User userFromDb = dao.getUserByUsername("steven");
		assertEquals("mypass", userFromDb.getPassword(), "Password must be equals");
	}
	
	@Test
	void deleteUserTest() {
		dao.deleteUser("danielniko");
		User userFromDb = dao.getUserByUsername("danielniko");
		assertNull(userFromDb.getUsername(), "Username should be null");
	}
	
	@Test
	void updateUserTest() {
		User user = new User();
		user.setUsername("danielniko");
		user.setPassword("secret");
		user.setFullName("Daniel Niko");
		user.setEmail("danielniko@example.com");
		dao.addUser(user);
		user.setPassword("verysecret");
		dao.updateUser(user);
		User userFromDb = dao.getUserByUsername("danielniko");
		assertEquals("verysecret", userFromDb.getPassword(), "Updated password must be equal.");
	}

}

Once again, right click on the test class and run it as a JUnit test. Once everything is green, we can move on to the App class inside id.danielniko.simpleGradleJunit.

package id.danielniko.simpleGradleJunit;

import id.danielniko.dao.UserDao;
import id.danielniko.model.User;

public class App {

    public static void main(String[] args) {
    	UserDao dao = new UserDao();
    	// Add new user
    	User user = new User();
    	user.setUsername("luiz");
    	user.setPassword("secret");
    	user.setFullName("Luiz Suarez");
    	user.setEmail("luiz@example.com");
    	dao.addUser(user);
    	// Update user
    	user.setPassword("verysecret");
    	dao.updateUser(user);
    	System.out.println(dao.getAllUsers());
    	
    }
    
}

Not much is happening in this class, since it is only used to show how to connect Java with MySQL and cover it with unit tests.

Now we need to build our program. Go to the project root directory and execute the following command.

gradlew build

You will see that the build also triggers all of our unit tests. If they ran successfully before, this should not be a problem.
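If you only want to build without re-running the tests (for example, while iterating on changes you have already verified), Gradle lets you exclude a task with the -x flag:

gradlew build -x test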

If you want to distribute the program as an executable, you can run the command below.

gradlew distZip

This command builds a simpleGradleJunit.zip file inside the build\distributions folder. Unzip it and run the .bat file inside the bin folder, either by double-clicking it or from the command prompt. You will get the following:

[User [username=luiz, password=verysecret, fullName=Luiz Suarez, email=luiz@example.com], User [username=steven, password=mypass, fullName=Steven Gerrard, email=steven@example.com]]

Using Serializable in Java

What Serializable Is For

If you use Java to build a web application project, you will certainly come across the Serializable interface. Serializable is used so that objects created from a class that implements this interface can be converted into a byte stream. The main purpose is to allow the object to be saved to a file or sent over the network.

Objects in Java live on the heap in memory. If the application dies, the OS reclaims the memory and, as a consequence, the objects are gone. For an object to be taken out of the heap and saved to disk or sent over the network, it must first be converted into a byte stream through the serialization process.

For example, take the following HTTP request:

public static void main(String[] args) {

    HttpClient client = new DefaultHttpClient();
    HttpPost post = new HttpPost("https://www.example.com");

    try {
        List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(1);
        nameValuePairs.add(new BasicNameValuePair("Email", "youremail"));
        post.setEntity(new UrlEncodedFormEntity(nameValuePairs));
        HttpResponse response = client.execute(post);
        BufferedReader rd = new BufferedReader(new InputStreamReader(
                response.getEntity().getContent()));
        String line = "";
        while ((line = rd.readLine()) != null) {
            System.out.println(line);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

In the code snippet above, for the nameValuePairs object to be taken out of the heap space and sent as bytes over the internet, its class must implement Serializable. If you check the javadoc, that is indeed the case: the class implements Serializable.
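To make the round trip concrete, here is a minimal sketch of writing an object to a file and reading it back. The Employee class and the employee.ser file name below are only illustrative; they are not part of any of the projects above.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Hypothetical Employee class, used only for this illustration
    static class Employee implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        String email;
    }

    public static void main(String[] args) {
        Employee emp = new Employee();
        emp.name = "Daniel";
        emp.email = "daniel@example.com";

        // Serialization: object -> byte stream -> file
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("employee.ser"))) {
            out.writeObject(emp);
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Deserialization: file -> byte stream -> object
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("employee.ser"))) {
            Employee fromFile = (Employee) in.readObject();
            System.out.println(fromFile.name + " - " + fromFile.email);
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}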

serialVersionUID

If your class implements Serializable, you should declare a serialVersionUID in that class. IDEs usually help here; in Eclipse, for example, you can choose between generating a default value (1L) or a value generated from the variables in your class.

The purpose of serialVersionUID is to determine whether a serialized byte stream or file can be deserialized, that is, turned back into an object, by a class with the same serialVersionUID.

So, say that today, with serialVersionUID = 1L, we serialize an object into a file.

class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    String email;
}

The next day we add a new class variable, mobileNo. As long as we do not change the serialVersionUID, that old file can still be turned back into an object, although mobileNo will remain null. That is to be expected, since the variable simply did not exist at the time the file was written.

class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    String email;
    String mobileNo;
}

However, sometimes we do not want old files to be convertible back into objects, because the new variable might lead to different business logic. In that case, we simply change the serialVersionUID, for example to serialVersionUID = 2L.

By changing the serialVersionUID, any attempt to turn the old file back into an object (deserialization) will make the program throw an exception: InvalidClassException.
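As a rough sketch, and assuming employee.ser was written before the serialVersionUID was changed, the failure surfaces like this:

try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("employee.ser"))) {
    Employee employee = (Employee) in.readObject();
    System.out.println(employee.name);
} catch (InvalidClassException e) {
    // The serialVersionUID stored in the file no longer matches the current class
    System.out.println("Incompatible class version: " + e.getMessage());
} catch (IOException | ClassNotFoundException e) {
    e.printStackTrace();
}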

That is how Serializable is used in Java. If you have any questions, feel free to ask in the comments.

Simple CRUD Using Java, Hibernate and MySQL

You can view and download the complete source code of this tutorial from my github account.

In this tutorial, we will create a simple CRUD (Create Read Update Delete) User Management Console Application using Java, Hibernate and MySQL.

For this tutorial, we will need the following tools (older or newer versions should also work):

1. Eclipse IDE for Java EE Developers (Indigo – ver. 3.7)

2. MySQL Community Server and MySQL Workbench (GUI Tool)

3. MySQL Connector for Java

4. Hibernate ORM


First, let's create the database and table for User using the following SQL script. Copy and run it in MySQL Workbench (GUI Tool) –> SQL Editor:

create database UserDB;
use UserDB;
grant all on UserDB.* to 'admin'@'localhost' identified by 'test'; 

CREATE TABLE UserDB.`users` (
  `userid` int(11) NOT NULL AUTO_INCREMENT,
  `firstname` varchar(45) DEFAULT NULL,
  `lastname` varchar(45) DEFAULT NULL,
  `dob` date DEFAULT NULL,
  `email` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`userid`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8;

Go to Eclipse and create a new project. Select File—>New—>Java Project. Enter "SimpleJavaHibernate" as the project name. Accept all the default values. Click Finish.

Please refer to this project directory structure in case you miss something along the way:

[Screenshot: project directory structure]

After creating the project, we need to add MySQL connector .jar file to our project build path. To do so, right click our newly created project “SimpleJavaHibernate” and choose Properties to open Properties window.

Select Java Build Path in the left side tree view to open right side detail view. Click Add External JARs.. button to open File browser dialog and point it to the MySQL connector .jar file which you have downloaded earlier.

Furthermore, we need to add the Hibernate library to our project. Right click the project again and select Properties –> Java Build Path. This time, click Add Library and select User Library from the list. Click User Libraries, then New, and enter "Hibernate" as the library name. Click OK. Select our newly created user library and click Add JARs. Select all the .jar files in both the provided and required folders located inside this directory structure: [your hibernate download folder]—>hibernate-search-4.1.0.CR3-dist—>hibernate-search-4.1.0.CR3—>dist—>lib

Verify that your build path matches the configuration in the picture below:

[Screenshot: Java build path configuration]

That is all the configuration we need; now let us get to the code.

Create four packages in the src folder.

  • com.daniel: contains the main method as the entry point for our console application
  • com.daniel.dao: contains the logic for database operation
  • com.daniel.model: contains the POJOs (Plain Old Java Objects). Each class in this package represents a database table; for this tutorial, however, we only have one table.
  • com.daniel.util : contains the class for initiating database connection

Next, create a new Java class in the com.daniel.model package. Name it "User.java" and insert the following code. Each variable in this class represents a field of the users table in our database.

package com.daniel.model;

import java.util.Date;

public class User {

    private int userid;
    private String firstName;
    private String lastName;
    private Date dob;
    private String email;
    public int getUserid() {
        return userid;
    }
    public void setUserid(int userid) {
        this.userid = userid;
    }
    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public Date getDob() {
        return dob;
    }
    public void setDob(Date dob) {
        this.dob = dob;
    }
    public String getEmail() {
        return email;
    }
    public void setEmail(String email) {
        this.email = email;
    }
    @Override
    public String toString() {
        return "User [userid=" + userid + ", firstName=" + firstName
                + ", lastName=" + lastName + ", dob=" + dob + ", email="
                + email + "]";
    }    
}

Create a new class in com.daniel.util package and name it HibernateUtil.java. This class will read the configuration in our hibernate.cfg.xml file which handles the database connection to our MySQL server.

package com.daniel.util;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {
    private static final SessionFactory sessionFactory = buildSessionFactory();

    private static SessionFactory buildSessionFactory() {
        try {
            // Create the SessionFactory from hibernate.cfg.xml
            return new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            // Make sure you log the exception, as it might be swallowed
            System.err.println("Initial SessionFactory creation failed." + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}

Create the aforementioned hibernate.cfg.xml configuration file directly under SimpleJavaHibernate folder. Put the following information inside.

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
 "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
 "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost/UserDB</property>
        <property name="connection.username">admin</property>
        <property name="connection.password">test</property>

        <!-- JDBC connection pool (use the built-in) -->
        <property name="connection.pool_size">1</property>

        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

        <!-- Enable Hibernate's automatic session context management -->
        <property name="current_session_context_class">thread</property>

        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>

        <!-- Echo all executed SQL to stdout -->
        <property name="show_sql">true</property>

        <!-- Drop and re-create the database schema on startup -->
        <property name="hbm2ddl.auto">update</property>

        <!-- Mapping files -->
        <mapping resource="user.hbm.xml" />

    </session-factory>
</hibernate-configuration>

Notice that inside the file there is a mapping resource entry pointing to a user.hbm.xml file. So, create an .xml file directly under the SimpleJavaHibernate folder to map our Plain Old Java Object (POJO) variables to fields in the database. Name it user.hbm.xml and write the following code inside:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC 
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

<hibernate-mapping>
    <class name="com.daniel.model.User" table="users">
        <id name="userid" type="int" column="userid">
            <generator class="increment" />
        </id>
        <property name="firstName">
            <column name="firstname" />
        </property>
        <property name="lastName">
            <column name="lastname" />
        </property>
        <property name="dob">
            <column name="dob" />
        </property>
        <property name="email">
            <column name="email" />
        </property>
    </class>
</hibernate-mapping>

Next, create a new class in the com.daniel.dao package and name it UserDao.java. DAO stands for Data Access Object; it contains the logic for the database operations.

package com.daniel.dao;

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.Transaction;

import com.daniel.model.User;
import com.daniel.util.HibernateUtil;

public class UserDao {

    public void addUser(User user) {
        Transaction trns = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            trns = session.beginTransaction();
            session.save(user);
            session.getTransaction().commit();
        } catch (RuntimeException e) {
            if (trns != null) {
                trns.rollback();
            }
            e.printStackTrace();
        } finally {
            session.flush();
            session.close();
        }
    }

    public void deleteUser(int userid) {
        Transaction trns = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            trns = session.beginTransaction();
            User user = (User) session.load(User.class, new Integer(userid));
            session.delete(user);
            session.getTransaction().commit();
        } catch (RuntimeException e) {
            if (trns != null) {
                trns.rollback();
            }
            e.printStackTrace();
        } finally {
            session.flush();
            session.close();
        }
    }

    public void updateUser(User user) {
        Transaction trns = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            trns = session.beginTransaction();
            session.update(user);
            session.getTransaction().commit();
        } catch (RuntimeException e) {
            if (trns != null) {
                trns.rollback();
            }
            e.printStackTrace();
        } finally {
            session.flush();
            session.close();
        }
    }

    public List<User> getAllUsers() {
        List<User> users = new ArrayList<User>();
        Transaction trns = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            trns = session.beginTransaction();
            users = session.createQuery("from User").list();
        } catch (RuntimeException e) {
            e.printStackTrace();
        } finally {
            session.flush();
            session.close();
        }
        return users;
    }

    public User getUserById(int userid) {
        User user = null;
        Transaction trns = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            trns = session.beginTransaction();
            String queryString = "from User where id = :id";
            Query query = session.createQuery(queryString);
            query.setInteger("id", userid);
            user = (User) query.uniqueResult();
        } catch (RuntimeException e) {
            e.printStackTrace();
        } finally {
            session.flush();
            session.close();
        }
        return user;
    }
}

Finally, create our main application class inside the com.daniel package and name it App.java

package com.daniel;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import com.daniel.dao.UserDao;
import com.daniel.model.User;
import com.daniel.util.HibernateUtil;

public class App {

    public static void main(String[] args) {
        UserDao dao = new UserDao();

        // Add new user
        User user = new User();
        user.setFirstName("Daniel");
        user.setLastName("NikoJdbc");
        try {
            Date dob = new SimpleDateFormat("yyyy-MM-dd").parse("1986-01-02");
            user.setDob(dob);
        } catch (ParseException e) {
            e.printStackTrace();
        }
        user.setEmail("daniel@example.com");
        dao.addUser(user);
        // Update user
        user.setEmail("daniel@updatedJdbc.com");
        user.setUserid(1);
        dao.updateUser(user);

        // Delete user
        //dao.deleteUser(2);

        // Get all users
        for (User iter : dao.getAllUsers()) {
            System.out.println(iter);
        }

        // Get user by id
        System.out.println(dao.getUserById(8));

        // Release the Hibernate SessionFactory before the application exits
        HibernateUtil.getSessionFactory().close();
    }

}

That is it. Run the project from Eclipse and see the output in the Console window.

Simple CRUD Using Jsp, Servlet and MySQL

You can view and download the complete source code of this tutorial from my github account.

In this tutorial, we will create a simple CRUD (Create Read Update Delete) User Management Web Application using Jsp, Servlet and MySQL.

For this tutorial, we will need the following tools (older or newer versions should also work). Basic Java knowledge is assumed.

1. Eclipse IDE for Java EE Developers (Indigo – ver. 3.7)

2. Apache Tomcat ver 7.0

3. MySQL Community Server and MySQL Workbench (GUI Tool)

4. MySQL Connector for Java

5. jstl.jar and standard.jar. You can get these jars from your Tomcat. Check in this directory : (your tomcat directory)—>apache-tomcat-7.0.26-windows-x86—>apache-tomcat-7.0.26—>webapps—>examples—>WEB-INF—>lib

I will tell you where you should put these jars later.

6. jQuery for javascript capability. In this case, we only use it for the datepicker component


First, let's create the database and table for User using the following SQL script:

create database UserDB;
use UserDB;
grant all on UserDB.* to 'admin'@'localhost' identified by 'test'; 

CREATE TABLE UserDB.`users` (
  `userid` int(11) NOT NULL AUTO_INCREMENT,
  `firstname` varchar(45) DEFAULT NULL,
  `lastname` varchar(45) DEFAULT NULL,
  `dob` date DEFAULT NULL,
  `email` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`userid`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8;

Go to Eclipse. Before we create a new project for our application, we need to set up the server. Select File—>New—>Other. From the tree, select Server.

Choose Apache—>Tomcat v7.0 Server and set the runtime environment.

Next, create a new project. Select File—>New—>Dynamic Web Project.

Enter "SimpleJspServletDB" as the project name. Set the target runtime to the Apache Tomcat v7.0 server we already set up. Click Finish.

Please refer to this project directory structure in case you miss something along the way:

[Screenshot: project directory structure]

Copy standard.jar, the MySQL connector jar, and the JSTL jar to the WEB-INF—>lib folder.

Create four packages in the src folder.

  • com.daniel.controller: contains the servlets
  • com.daniel.dao: contains the logic for database operation
  • com.daniel.model: contains the POJOs (Plain Old Java Objects). Each class in this package represents a database table; for this tutorial, however, we only have one table.
  • com.daniel.util : contains the class for initiating database connection

Next, create a new Java class in the com.daniel.model package. Name it "User.java" and insert the following code. Each variable in this class represents a field of the users table in our database.

package com.daniel.model;

import java.util.Date;

public class User {

    private int userid;
    private String firstName;
    private String lastName;
    private Date dob;
    private String email;
    public int getUserid() {
        return userid;
    }
    public void setUserid(int userid) {
        this.userid = userid;
    }
    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public Date getDob() {
        return dob;
    }
    public void setDob(Date dob) {
        this.dob = dob;
    }
    public String getEmail() {
        return email;
    }
    public void setEmail(String email) {
        this.email = email;
    }
    @Override
    public String toString() {
        return "User [userid=" + userid + ", firstName=" + firstName
                + ", lastName=" + lastName + ", dob=" + dob + ", email="
                + email + "]";
    }    
}

Create a new class in the com.daniel.util package and name it DbUtil.java. This class handles the database connection to our MySQL server. In it, we read a .properties file which contains the information necessary for the connection.

package com.daniel.util;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class DbUtil {

    private static Connection connection = null;

    public static Connection getConnection() {
        if (connection != null)
            return connection;
        else {
            try {
                Properties prop = new Properties();
                InputStream inputStream = DbUtil.class.getClassLoader().getResourceAsStream("db.properties");
                prop.load(inputStream);
                String driver = prop.getProperty("driver");
                String url = prop.getProperty("url");
                String user = prop.getProperty("user");
                String password = prop.getProperty("password");
                Class.forName(driver);
                connection = DriverManager.getConnection(url, user, password);
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (SQLException e) {
                e.printStackTrace();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return connection;
        }

    }
}

Create the properties file directly under the src folder. Create a new file, name it db.properties. Put the following information inside.

driver=com.mysql.jdbc.Driver
url=jdbc:mysql://localhost:3306/UserDB
user=admin
password=test

Next, create a new class in the com.daniel.dao package and name it UserDao.java. DAO stands for Data Access Object; it contains the logic for the database operations.

package com.daniel.dao;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import com.daniel.model.User;
import com.daniel.util.DbUtil;

public class UserDao {

    private Connection connection;

    public UserDao() {
        connection = DbUtil.getConnection();
    }

    public void addUser(User user) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("insert into users(firstname,lastname,dob,email) values (?, ?, ?, ? )");
            // Parameters start with 1
            preparedStatement.setString(1, user.getFirstName());
            preparedStatement.setString(2, user.getLastName());
            preparedStatement.setDate(3, new java.sql.Date(user.getDob().getTime()));
            preparedStatement.setString(4, user.getEmail());
            preparedStatement.executeUpdate();

        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public void deleteUser(int userId) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("delete from users where userid=?");
            // Parameters start with 1
            preparedStatement.setInt(1, userId);
            preparedStatement.executeUpdate();

        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public void updateUser(User user) {
        try {
            PreparedStatement preparedStatement = connection
                    .prepareStatement("update users set firstname=?, lastname=?, dob=?, email=?" +
                            "where userid=?");
            // Parameters start with 1
            preparedStatement.setString(1, user.getFirstName());
            preparedStatement.setString(2, user.getLastName());
            preparedStatement.setDate(3, new java.sql.Date(user.getDob().getTime()));
            preparedStatement.setString(4, user.getEmail());
            preparedStatement.setInt(5, user.getUserid());
            preparedStatement.executeUpdate();

        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public List<User> getAllUsers() {
        List<User> users = new ArrayList<User>();
        try {
            Statement statement = connection.createStatement();
            ResultSet rs = statement.executeQuery("select * from users");
            while (rs.next()) {
                User user = new User();
                user.setUserid(rs.getInt("userid"));
                user.setFirstName(rs.getString("firstname"));
                user.setLastName(rs.getString("lastname"));
                user.setDob(rs.getDate("dob"));
                user.setEmail(rs.getString("email"));
                users.add(user);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }

        return users;
    }

    public User getUserById(int userId) {
        User user = new User();
        try {
            PreparedStatement preparedStatement = connection.
                    prepareStatement("select * from users where userid=?");
            preparedStatement.setInt(1, userId);
            ResultSet rs = preparedStatement.executeQuery();

            if (rs.next()) {
                user.setUserid(rs.getInt("userid"));
                user.setFirstName(rs.getString("firstname"));
                user.setLastName(rs.getString("lastname"));
                user.setDob(rs.getDate("dob"));
                user.setEmail(rs.getString("email"));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }

        return user;
    }
}

Finally, create a new Servlet inside the com.daniel.controller package and name it UserController.java

package com.daniel.controller;

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.daniel.dao.UserDao;
import com.daniel.model.User;

public class UserController extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private static String INSERT_OR_EDIT = "/user.jsp";
    private static String LIST_USER = "/listUser.jsp";
    private UserDao dao;

    public UserController() {
        super();
        dao = new UserDao();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String forward="";
        String action = request.getParameter("action");

        if (action.equalsIgnoreCase("delete")){
            int userId = Integer.parseInt(request.getParameter("userId"));
            dao.deleteUser(userId);
            forward = LIST_USER;
            request.setAttribute("users", dao.getAllUsers());    
        } else if (action.equalsIgnoreCase("edit")){
            forward = INSERT_OR_EDIT;
            int userId = Integer.parseInt(request.getParameter("userId"));
            User user = dao.getUserById(userId);
            request.setAttribute("user", user);
        } else if (action.equalsIgnoreCase("listUser")){
            forward = LIST_USER;
            request.setAttribute("users", dao.getAllUsers());
        } else {
            forward = INSERT_OR_EDIT;
        }

        RequestDispatcher view = request.getRequestDispatcher(forward);
        view.forward(request, response);
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        User user = new User();
        user.setFirstName(request.getParameter("firstName"));
        user.setLastName(request.getParameter("lastName"));
        try {
            Date dob = new SimpleDateFormat("MM/dd/yyyy").parse(request.getParameter("dob"));
            user.setDob(dob);
        } catch (ParseException e) {
            e.printStackTrace();
        }
        user.setEmail(request.getParameter("email"));
        String userid = request.getParameter("userid");
        if(userid == null || userid.isEmpty())
        {
            dao.addUser(user);
        }
        else
        {
            user.setUserid(Integer.parseInt(userid));
            dao.updateUser(user);
        }
        RequestDispatcher view = request.getRequestDispatcher(LIST_USER);
        request.setAttribute("users", dao.getAllUsers());
        view.forward(request, response);
    }
}

Now it is time to create the JSPs, the views for our application. Under the WebContent folder, create a JSP file and name it index.jsp.

<%@ page language="java" contentType="text/html; charset=EUC-KR" pageEncoding="EUC-KR"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=EUC-KR">
<title>Insert title here</title>
</head>
<body>
<jsp:forward page="/UserController?action=listUser" />
</body>
</html>

This JSP serves as the entry point for our application. It forwards the request to our servlet, which lists all the users in the database.

Next, create the JSP that lists all the users. Put it in the WebContent folder and name it listUser.jsp.

<%@ page language="java" contentType="text/html; charset=EUC-KR" pageEncoding="EUC-KR"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=EUC-KR">
<title>Show All Users</title>
</head>
<body>
    <table border=1>
        <thead>
            <tr>
                <th>User Id</th>
                <th>First Name</th>
                <th>Last Name</th>
                <th>DOB</th>
                <th>Email</th>
                <th colspan=2>Action</th>
            </tr>
        </thead>
        <tbody>
            <c:forEach items="${users}" var="user">
                <tr>
                    <td><c:out value="${user.userid}" /></td>
                    <td><c:out value="${user.firstName}" /></td>
                    <td><c:out value="${user.lastName}" /></td>
                    <td><fmt:formatDate pattern="yyyy-MMM-dd" value="${user.dob}" /></td>
                    <td><c:out value="${user.email}" /></td>
                    <td><a href="UserController?action=edit&userId=<c:out value="${user.userid}"/>">Update</a></td>
                    <td><a href="UserController?action=delete&userId=<c:out value="${user.userid}"/>">Delete</a></td>
                </tr>
            </c:forEach>
        </tbody>
    </table>
    <p><a href="UserController?action=insert">Add User</a></p>
</body>
</html>

In this JSP, we use JSTL to connect the JSP and the servlet. We should refrain from using scriptlets inside JSPs because they make the pages more difficult to maintain, not to mention uglier.

Next, create a new JSP in the WebContent folder and name it user.jsp.

<%@ page language="java" contentType="text/html; charset=EUC-KR" pageEncoding="EUC-KR"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=EUC-KR">
<link type="text/css"
    href="css/ui-lightness/jquery-ui-1.8.18.custom.css" rel="stylesheet" />
<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>
<script type="text/javascript" src="js/jquery-ui-1.8.18.custom.min.js"></script>
<title>Add new user</title>
</head>
<body>
    <script>
        $(function() {
            $('input[name=dob]').datepicker();
        });
    </script>

    <form method="POST" action='UserController' name="frmAddUser">
        User ID : <input type="text" readonly="readonly" name="userid"
            value="<c:out value="${user.userid}" />" /> <br /> 
        First Name : <input
            type="text" name="firstName"
            value="<c:out value="${user.firstName}" />" /> <br /> 
        Last Name : <input
            type="text" name="lastName"
            value="<c:out value="${user.lastName}" />" /> <br /> 
        DOB : <input
            type="text" name="dob"
            value="<fmt:formatDate pattern="MM/dd/yyyy" value="${user.dob}" />" /> <br /> 
        Email : <input type="text" name="email"
            value="<c:out value="${user.email}" />" /> <br /> <input
            type="submit" value="Submit" />
    </form>
</body>
</html>

Lastly, check the web.xml file located in the WebContent—>WEB-INF folder of your project. Make sure it looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5">
  <display-name>SimpleJspServletDB</display-name>
  <welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
  <servlet>
    <description></description>
    <display-name>UserController</display-name>
    <servlet-name>UserController</servlet-name>
    <servlet-class>com.daniel.controller.UserController</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>UserController</servlet-name>
    <url-pattern>/UserController</url-pattern>
  </servlet-mapping>
</web-app>

That is it. Right click the project name and run it using the Run As–>Run on Server option.

[Screenshot: user management web application]