h1. Request Details
h2. Background
Design review: https://docs.google.com/document/d/1HH7L2aVou8khyq5ljr30V_0GtfwRdZTo6Y_351bWfXU/edit#
To accommodate our migration out of tap, and to support the business-critical hotel import process (for marketability), we need 2 EC2 instances to create aprhdt, which will serve content-ops. We also need to add a SendMessage policy to the aprhdjw-hotelimport-queue to allow manually triggered imports.
h2. Purpose
aprhdt will act as a logic processor for the tools behind the accommodation dashboard (ashtool). The immediate need is to manually submit hotel import requests from content-ops (to be executed by aprhdjw) and to enable a hotel import & merging tool, which will curate (selectively merge/add/remove/modify) static hotel content. We will add static hotel content management (assets/hotel policies/attributes/etc.) in the future.
h2. Impact
Enables us to move our tools out of tap. Enables hotel import tools (enhanced hotel import), which in turn increases our marketability in SEA and other regions (our competitors are pulling ahead of us in this area very rapidly). Also enables hotel content curation, which increases buyability (good-quality hotel assets, accurate descriptions, overviews, and hotel policies).
h2. Risk
There is a moderate risk: if validation of the front-end spec is not implemented correctly, our production data might be affected. We will build solid validation, and content-ops also has its own procedure for manually checking our DWH for bad/corrupt data.
There is a very small increase in DB load (to display and upsert data); since humans are processing the data, they will not be able to flood the DB the way an automated process could.
h2. Resources
h3. EC2
h4. Configuration
{code}
count = "2"
instance_type = "m4.large"
ebs_optimized = "false"
disable_api_termination = "false"
root_block_device = {
  volume_type           = "gp2"
  volume_size           = "8"
  delete_on_termination = "true"
}
tags = {
  Service       = "aprhdt"
  Cluster       = "aprhdt-app"
  ProductDomain = "apr"
  Application   = "java-7"
  Environment   = "production"
  Description   = "Accom Product Hotel Data Tools"
}
{code}
h3. ALB
h4. Configuration
{code}
name = "aprhdt-lbint-<random-id>"
security_groups = "aprhdt-lbint"
internal = "true"
idle_timeout = "60"
enable_deletion_protection = "false"
tags = {
  Name          = "aprhdt-lbint-<random-id>"
  Service       = "aprhdt"
  ProductDomain = "apr"
  Environment   = "production"
  Description   = "Internal Accom Product Hotel Data Tools load balancer"
}
{code}
h4. Listener
{code}
port = "443"
protocol = "HTTPS"
default_action {
  target_group_arn = "aprhdt-app"
  type             = "forward"
}
{code}
h4. Target Group
{code}
name = "aprhdt-app"
port = "61033"
protocol = "HTTP"
deregistration_delay = "300"
tags = {
  Name          = "aprhdt-app"
  Service       = "aprhdt"
  ProductDomain = "apr"
  Environment   = "production"
  Description   = "Target group for Accom Product Hotel Data Tools app"
}
health_check = {
  interval            = "10"
  path                = "/healthcheck"
  port                = "traffic-port"
  protocol            = "HTTP"
  timeout             = "5"
  healthy_threshold   = "5"
  unhealthy_threshold = "2"
  matcher             = "200"
}
{code}
h4. DNS Record
aprhdt.main.tvlk.cloud
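The target group above probes {{/healthcheck}} every 10 seconds and marks an instance healthy when the response status matches the {{matcher}} value ("200"). As a minimal sketch of the same check done by hand, the snippet below mimics the ALB matcher logic (which also accepts comma-separated codes and ranges such as "200-299") and probes the endpoint through the DNS record above; the URL is only reachable from inside the VPC since the ALB is internal.

```python
from urllib.request import urlopen

def matches(status, matcher="200"):
    """Mimic the ALB health-check matcher: comma-separated
    status codes and/or ranges, e.g. "200", "200,302", "200-299"."""
    for part in matcher.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            if int(lo) <= status <= int(hi):
                return True
        elif status == int(part):
            return True
    return False

def probe(url="https://aprhdt.main.tvlk.cloud/healthcheck", timeout=5):
    """One manual probe of the same endpoint the target group checks.
    Works only from inside the VPC (the load balancer is internal)."""
    with urlopen(url, timeout=timeout) as resp:
        return matches(resp.status, "200")
```

An instance must pass this check 5 consecutive times (healthy_threshold) before receiving traffic, and is removed after 2 consecutive failures (unhealthy_threshold).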
h3. SQS Queue
h4. SQS Policy
{code}
producers = ["aprhdt-app"]
allowed_action = [
  "sqs:SendMessage",
]
{code}
h3. Connectivity
h4. Rules
||Source||Destination||from_port||to_port||protocol||
|aprhdt-app|aprhd-lbint|443|443|TCP|
|aprhdt-app|aprnes-lbint|443|443|TCP|
|aprhdt-app|axptcnt-lbint|443|443|TCP|
|ashtool-app|aprhdt-lbint|443|443|TCP|
[^connectivity.csv]