Featured

Using S3 based VMDK to boot EC2


So the first question you would ask is "why?" The answers are pretty compelling, but I would put the reasons as:

You just want to upload your VM estate straight into S3, and then use the fat pipe between EC2 and S3 (40 to 50 megabits per second) to boot EC2 from S3.

You want a bootable AMI instance, but AWS does not seem to allow this. It's a bit like building a VM from a bootable ISO: AWS does not allow you to mount an ISO and then reboot their AMIs from disc.

 

The tones, the DTMF tones

dem tones

When I was younger and living with my older sister, she put a physical phone lock onto the phone dial so it couldn't be used, such as the one below.

This meant that the phone was unusable for making calls, though you could receive calls fine. Since I was quite smitten with a girl I wanted to call, I observed that when the lock was off and I could make a call, dialling a 3 produced what sounded like three connections as the analogue switching units connected for a 3.

This was exactly the same sound you would hear when you lifted the receiver and the circuit to the telco was established. If you lifted the receiver and pressed down the switch that let the phone know you were off hook, you could hear the dialling, so if you wanted to dial a sequence of numbers you could just tap them out on the hook switch and you would get connected.

Later on in my career, when I worked for a telco, the analogue guys used to have these switching units on their desks (like the ones below).

They were platinum tipped and so quite valuable; it was the noise of these connecting that I heard all those years ago. When you depressed the off-hook button on the phone, the voltage changes went across the local loop to the exchange, where the Strowger switches would register the voltage change as a signalling attempt.

Fast forward to my latest issue: I have an AudioCodes M800 gateway with a SIP trunk connection to a carrier. If I dial through the trunk to the M800 and then forward the extension to the contact centre (CC), where we have to enter some DTMF, the CC cannot hear the tones.
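Fast forwarding from pulse dialling to DTMF: each DTMF digit is a pair of simultaneous tones, one from a low "row" group and one from a high "column" group. A minimal sketch of the tone table (frequencies per ITU-T Q.23):

```python
# DTMF digits are signalled as a pair of tones (ITU-T Q.23):
# one from a low-frequency row group and one from a high-frequency column group.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477), "A": (697, 1633),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477), "B": (770, 1633),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477), "C": (852, 1633),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477), "D": (941, 1633),
}

def tones_for(digit: str) -> tuple[int, int]:
    """Return the (low, high) tone pair in Hz for a dialled digit."""
    return DTMF[digit.upper()]

print(tones_for("3"))  # (697, 1477)
```

Worth remembering when a contact centre "cannot hear" tones: the digits may be carried in-band as audio or out-of-band as RFC 2833/4733 telephone-events, and if the trunk and the gateway disagree on the method, the tone pairs never reach the far end in a form it detects.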

Hunting the iceberg of internal number ranges

Teams with media optimisation does not play nice with E.164 addressing; that is to say, for Teams direct media optimisation to work, we need an E.164 address for hosts that are in the site subnet.

I am a bit pressed for time, but this is the problem: when you deploy the E1 you may size it for growth and get it wrong, resulting in maybe some PBX-based routing and some internal extensions which don't immediately translate into a true E.164 address.

But how do you find them, and know you have got them all? The first way would be a simple extension count: if you have more endpoints than DDIs in your E1 circuit, then a PBX designer has got clever with the dial plan.

The phone number mask trick

This allows you to make a call from an internal number by using a mask that is in the public DDI range. This means the E1 circuit will accept it, but you cannot ring it back; you could, however, use an IVR to redirect to the internal range for inbound.

Stick it in the Public DDI

The best way to tackle this is with some SQL:

run sql select dnorpattern, d.description, e164mask, tkpatternusage from devicenumplanmap inner join numplan on fknumplan = numplan.pkid join device d on fkdevice = d.pkid where e164mask = '+52867711XXXX'

Check your mask against the public DDI ranges; if you have extensions in this range that don't translate, those are your hidden internal numbers.

The SBC outbound source trick

This just applies the manipulation for the public range at the SBC.

Normalising Internal to External

It's doing the same thing: we are normalising to a public number. This would be good for contact centre routing, instead of making the numbers non-diallable by some other means.

Testing Direct Media optimisation

For direct media optimisation to work we need:

  • a subnet to punch the media to, from the internal SIP interface to the site E1
  • VPN addresses to be seen as site addresses
  • calls from the public internet to be treated as external calls
  • the WebRTC element to work for both
  • the media to be encrypted
  • call forwarding and other redirections to be supported
  • to be able to transfer
  • to be able to hold and retrieve
  • internal IP streams to flow via internal SBC SIP interfaces
  • external IP streams to flow via external SBC SIP interfaces

So, in its way, this defines a test list.

Dump SBCs in the environment

Get-CsOnlinePSTNGateway is going to dump all the PSTN gateways, which are configured to let the internet reach voice networks; they are a list of SBC SIP interfaces.

A PSTN gateway

The site E1 SBC looks like this

Identity : mybiz.com
InboundTeamsNumberTranslationRules : {}
InboundPstnNumberTranslationRules : {}
OutboundTeamsNumberTranslationRules : {}
OutboundPstnNumberTranslationRules : {}
Fqdn : mybiz.com
SipSignalingPort : 5067
FailoverTimeSeconds : 10
ForwardCallHistory : True
ForwardPai : False
SendSipOptions : True
MaxConcurrentSessions : 1000
Enabled : True
MediaBypass : True
GatewaySiteId : Gateway ID
GatewaySiteLbrEnabled : False
GatewayLbrEnabledUserOverride : False
FailoverResponseCodes : 408,503,504
PidfLoSupported : False
MediaRelayRoutingLocationOverride :
ProxySbc : Teams Voice proxy
BypassMode : Always
Description :

When you configure a test account, the output looks like this:

RunspaceId : 56be03ea-2ddc-4f15-be4c-5b40a051b355
Identity : [email protected]
SipAddress : sip:[email protected]
ServiceNumber :
TollFreeServiceNumber :
ConferenceId : 0
BridgeId : 4bb5ceda-d994-4121-a671-44d92ffda798
BridgeName : Conference Bridge
Tenant : ceb177bf-013b-49ab-8a9c-4abce32afc1e
AllowPstnOnlyMeetings : False
AllowTollFreeDialIn : False
LeaderPin : *

UserPrincipalName : [email protected]
HostedVoiceMail : True
EnterpriseVoiceEnabled : True
OnPremLineURI : tel:+78126482513
OnlineVoiceRoutingPolicy : 402-Tosno-Users
TeamsCallingPolicy : AllowCalling

Troubleshooting Inbound DMO at the SBC

Global truths for the M800 across the world

There are a few global truths. If we have an M800 with an E1, then that is the resource we want to control at a site-based SBC. We could, by manipulating calling number masks and ARM (the brain), exit via one SBC and enter by another; this is acceptable in the SBC-verse and I will test it later, but here we want to control what the site-based E1 can do via the SBC. Before we brought a SIP interface into the SBC, this would have meant manipulating in the brain and its slave army of SBCs, but if we need to manipulate, we can do it in the gateway.

When we have a telephony resource in a gateway, we can punch media through to the E1 via the SBC-to-GW interface that I think got mentioned in the M800 setup thread.

signalling interfaces

The GW-to-SBC interface passes the ISDN signalling back to the SBC. When the communication has to go the other way, the IP group for GW2SBC is shown below.

The IP group looks like

One of the most important settings here is the "used by routing server" setting (ARM); without this it won't work.

IP group SBC2 GW 2

Classified by proxy set: this feels like a control.

What is classification?

Classification is the process the AudioCodes gateway or SBC uses to associate the source of a SIP dialog request with an IP Group. By associating the source of the call with a defined IP Group entity, the SBC can then use this association to determine how the call will be processed. The classification rules can not only be used to define the source of the SIP request for routing/manipulation purposes, but can also be used to block unwanted call attempts.

How does AudioCodes classify incoming calls?

AudioCodes uses three stages when attempting to classify an incoming SIP dialog request.

  1. The first stage is to attempt to classify the SIP dialog by first comparing if the source of the
    request already exists in the device’s registration database.
  2. If the device is not found in the AOR/registration database, the source can then be compared
    against the defined IP Address/ports used in the ProxySets table. If the source is found to be a match with
    one of the defined ProxySet entries, the call will be then associated with the source IPGroup which uses
    that ProxySet. This option can be enabled through the “Classify by Proxy Set” definition found in the
    IPGroup table definition.
  3. Finally, if the call does not match either the registration database or the definitions in the
    ProxySet table, the AudioCodes device will attempt to match the source by comparing the SIP request
    with the criteria defined in the Classification Table.
Are some classification options better than others?

For security purposes, it is recommended to use the Classification rules as defined in the Classification
Table, as the Classification Rules allow the user to define additional SIP message characteristics that can be used to increase the strictness and security of the classification process. Classifying the SIP request source by the Proxy Set is allowed, but is only recommended if the IP address of the IP Group is unknown or is configured with an FQDN.

M800 GW2SBC Advanced properties
SIP interfaces

This is covered in other parts of the blog, but I am going to do a quick primer here.

We add a second SIP interface when we want to get our SBC on.

The internal SIP interface is for the GW application, and the SBC interface is for the SBC (application type).

SBC IP interface

The proxy sets and IP groups define the signalling interfaces and ports needed to go from GW to SBC, and from SBC back to GW, to punch the media back to the E1.

Though the actual properties of an interface are defined in the profile, if there are tweaks to be made to get something to work, the profile seems to be the place.

My question would be: where is the profile for the SBC?

ARM basically controls the profile back to the regional SBC, and the GW controls the E1.

IP profiles are quite meaty configs.

Regional ARM profile
SBC signalling ARM profile (regional)
Part 1 of the SBC signalling properties

So what does the profile for the gateway interface look like?

Gateway general IP profile

The first difference is that it isn't created by the routing server.

Gateway Profile SBC signalling M800

Under the profiles, this defines what gets translated from the SBC interface to the gateway.

Designing DMO SBC routing environments

This comes down to good IP management. I mean, the bible is here:

https://docs.microsoft.com/en-us/microsoftteams/direct-routing-media-optimization-configure

the first big takeaway is this

External trusted IPs are the internet external IPs of the enterprise network. These IPs are the IP addresses used by Microsoft Teams clients when they connect to Microsoft 365. You need to add these external IPs for each site where you have users using Local Media Optimization.

So this should be an easy check, using the corresponding get cmdlet:

Get-CsTenantTrustedIPAddress

what you get is a list of

Identity : 10.0.0.2
MaskBits : 32
Description : Site Description
IPAddress : 10.0.0.2

Identity : 65.26.78.98
MaskBits : 32
Description : site description
IPAddress : 65.26.78.98
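As a quick sanity check, you can test whether a given client external IP falls inside the returned trusted list. A minimal sketch in Python (the addresses are just the example entries from the dump above):

```python
import ipaddress

# Trusted external IPs as returned by Get-CsTenantTrustedIPAddress
# (Identity/MaskBits pairs; values here are the examples from the dump above).
trusted = [("10.0.0.2", 32), ("65.26.78.98", 32)]

networks = [ipaddress.ip_network(f"{ip}/{bits}") for ip, bits in trusted]

def is_trusted(client_ip: str) -> bool:
    """True if the client's external IP falls inside any trusted entry."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)

print(is_trusted("65.26.78.98"))  # True
print(is_trusted("65.26.78.99"))  # False
```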

Inbound DMO signalling

Basically we have to see a PSTN event. In this architecture we invite the regional SBC, but any 200 OK has got to point back to the M800's SBC-to-GW interface.

It is also worth mentioning that this is for a hybrid environment, where we have a regional SIP trunk per region where we interwork with a PBX.

When we have a fully migrated user, there wouldn't be an invite to the regional SBC.

Let's see what the PSTN says.

Troubleshooting SIP call forwards Teams / Cisco

I am currently trying to solve a call forward issue in a SIP cloud. All call forward or SIP redirection call flows have two legs:

  1. The inbound Invite to the forwarded SIP endpoint
  2. The outbound Invite with a Diversion header

Where we must cross an SBC, we have number plan dependencies: SIP trunk providers will only accept diversion headers in a specific number format, which must also be part of the extension range for the device.

If you stray from the defined number plan, you may get a reject from the SIP trunk provider. The number is going to be in one of the following headers in the outbound invite:

  1. Diversion header, with format and some screening indications

  2. P-Asserted-Identity (PAI)

  3. Remote-Party-ID, with some screening indications

It's important to realize that it's a single call flow with two legs that stay up; like all cloud UC offerings, there is a web API call in the midst of the SIP signaling.

The call flow is a bit of a shell game: the headers swap values to make the call forward work, and the architecture is such that the inbound and outbound invites are processed on separate SBCs. Both legs have got to stay in state for the forward to work.

  1. MSP Invites the SBC
  2. SBC makes an API call to ARM
  3. SBC sends invite to IP PBX

At this point the IP PBX builds the outbound invite with the diversion header, which invites the SBC again, which in turn makes another API call, which then gets the SBC to invite back to the MSP.

The 180 that gets generated actually comes from the call forward via the second leg.

The inbound invite doesn't need a registered endpoint; the CUCM is going to generate the new outbound invite because of the existence of the diversion header. The software builds the invite to the call forward destination and sends the diversion header back to the MSP trunk.

The remote party in this outbound invite is the original inbound invite's calling party; we send this back to the MSP so it can connect the calls.

Inbound invite

Inbound invite ladder table
Just to set one end of the shell game from the MSP

after the ARM API

SIP re invitation to the SIP UA in the enterprise

Outbound invite

This is where we see how the SIP UA has modified the inbound invite

The first shell of the call forward

The invite is shown below. The SBC is using delayed offer for SDP. The diversion header itself isn't really used, since it's just a redirect flag, but MSPs use it to control SIP redirection.

after the ARM API

when it all goes wrong

This is an example of a call forward that doesn’t work when something has changed in the configuration

inbound invite

but remember that these two legs set each other up so there are two possibilities here

  • There is something about the invite that is getting forbidden; if this is the case, all we need to analyze is the f5 invite from both flows to see the differences
  • The Forbidden is being triggered by the outbound invite, with its diversion header and extra fields, and this proxies back to the inbound invite, so the 403 in the outbound leg just proxies back as the 403 in the inbound leg

outbound invite

For this failure we need to investigate why the MSP rejects with Forbidden

MSP database routing

MSP will authenticate calls depending on what is configured in their routing database. We also need to make a few definitions about the number formats

Private Number (rnx-ext): 534-6404
E164: 44-156478-6404

An example of PNP vs +CC.

MSP has stated that routing database authentication is based on two points:

  1. The Diversion header presented should be +CCxxx or a private number configured on the MSP side
  2. The From header should be +CCxxx or a private number configured on the MSP side

So in the case above, where extension 6404 has a call forward to 447795642799, the diversion header is shown below:

“Rolf Thomas” (sip:[email protected]);reason=unconditional;counter=1;screen=yes;privacy=off

534-6404 is the private number of 44-156478-6404 so the call is authenticated from the site
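The private-to-E.164 relationship above is essentially a prefix swap. A hypothetical sketch (the RNX prefix map is an assumption based on the single example here, with the dashes dropped):

```python
# Hypothetical prefix map: private RNX prefix -> E.164 site prefix (CC + number block).
# Values taken from the example above: 534-6404 is the private form of 44-156478-6404.
RNX_TO_E164 = {"534": "44156478"}

def to_e164(private: str) -> str:
    """Translate an rnx-ext private number like '534-6404' to +CC form."""
    rnx, ext = private.split("-")
    return "+" + RNX_TO_E164[rnx] + ext

print(to_e164("534-6404"))  # +441564786404
```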

So in the case of the failure seen at RNX 660, the PNP was not configured on the OBS side, even though we were sending the diversion header in the right format.

Network Access Control

Steel mountain Data centre- protected by Network ACLs

An ACL controls what can establish a TCP/IP socket connection to specific ports, which means ACLs control what services you can get to on a network.

ACL Nut

ACLs have a mask which determines what gets passed and what gets dropped.

They use an inverse mask; an example of this is 0.0.0.255. Wherever there is a 0 you must consider the network address; wherever there is a 1 you can ignore it: the inverse of a normal mask.

For example, if you have an ACL that looks like this:

10.1.1.0 0.0.0.255

the network address to be processed is 10.1.1.0, which in binary is

00001010.00000001.00000001.00000000

and the mask in binary is

00000000.00000000.00000000.11111111

  • 0 indicates that the network address must be considered (exact match)
  • 1 in the mask is a don’t care

10.1.1."we don't care what's in this last octet, since here it is all ones", so the range of addresses we process starts at 10.1.1.0 and stops at 10.1.1.255

you can subtract the normal mask from 255.255.255.255 to get the inverse mask

255.255.255.255-255.255.255.0 = 0.0.0.255
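That octet-by-octet subtraction is easy to script; a minimal sketch:

```python
def inverse_mask(mask: str) -> str:
    """Subtract a normal subnet mask from 255.255.255.255, octet by octet."""
    return ".".join(str(255 - int(octet)) for octet in mask.split("."))

print(inverse_mask("255.255.255.0"))  # 0.0.0.255
print(inverse_mask("255.255.254.0"))  # 0.0.1.255
```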

ACL cuts

Source 0.0.0.0 / wildcard 255.255.255.255 means "any". How so? Because we are flying all 1s against 0.0.0.0, each octet can be any value between 0 and 255, so that's pretty much everything.

If we have source/wildcard 10.1.1.2/0.0.0.0, we have to match exactly in every position, so that means the same as host 10.1.1.2.
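The wildcard logic can be expressed as a bitwise check: XOR the address with the network, and any difference that falls in a "must match" (0) bit position is a mismatch. A sketch:

```python
import ipaddress

def acl_match(address: str, network: str, wildcard: str) -> bool:
    """A bit set to 1 in the wildcard is 'don't care'; 0 bits must match exactly."""
    a = int(ipaddress.ip_address(address))
    n = int(ipaddress.ip_address(network))
    w = int(ipaddress.ip_address(wildcard))
    return (a ^ n) & ~w & 0xFFFFFFFF == 0

print(acl_match("10.1.1.2", "10.1.1.2", "0.0.0.0"))          # True  (host match)
print(acl_match("10.1.1.7", "10.1.1.0", "0.0.0.255"))        # True
print(acl_match("192.0.2.1", "0.0.0.0", "255.255.255.255"))  # True  ("any")
```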

192.168.32.0/24 (192.168.32.0 – 192.168.32.255)

192.168.33.0/24 (192.168.33.0-192.168.33.255)

192.168.34.0/24 you get the idea

192.168.35.0/24

192.168.36.0/24

192.168.37.0/24

192.168.38.0/24

192.168.39.0/24

In these addresses the first two octets and the last octet are the same for each of these networks.

Decimal  128 64 32 16  8  4  2  1
32         0  0  1  0  0  0  0  0
33         0  0  1  0  0  0  0  1
34         0  0  1  0  0  0  1  0
35         0  0  1  0  0  0  1  1
36         0  0  1  0  0  1  0  0
37         0  0  1  0  0  1  0  1
38         0  0  1  0  0  1  1  0
39         0  0  1  0  0  1  1  1

The first five bits match; that is to say, there are no differences in those columns.

This means that those 8 original networks can be summarized as one
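Python's ipaddress module can do this summarisation for you, which is a handy way to check the arithmetic:

```python
import ipaddress

# The eight /24s from 192.168.32.0 to 192.168.39.0 share their first 21 bits,
# so they collapse into a single /21 supernet.
nets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(32, 40)]
summary = list(ipaddress.collapse_addresses(nets))
print(summary)  # [IPv4Network('192.168.32.0/21')]
```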

If you look at the anatomy of a network acl or NACL it looks a bit like this

access-list 10 permit 192.168.146.0 0.0.1.255

this permits traffic from 192.168.146.0 to 192.168.147.255

access-list 11 permit 192.168.148.0 0.0.1.255

this permits traffic from 192.168.148.0 to 192.168.149.255
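You can verify those ranges by computing the first and last address a network/wildcard pair covers:

```python
import ipaddress

def acl_range(network: str, wildcard: str) -> tuple[str, str]:
    """First and last address matched by a network/wildcard pair."""
    n = int(ipaddress.ip_address(network))
    w = int(ipaddress.ip_address(wildcard))
    first = ipaddress.ip_address(n & ~w & 0xFFFFFFFF)  # wildcard bits cleared
    last = ipaddress.ip_address(n | w)                 # wildcard bits set
    return str(first), str(last)

print(acl_range("192.168.146.0", "0.0.1.255"))  # ('192.168.146.0', '192.168.147.255')
print(acl_range("192.168.148.0", "0.0.1.255"))  # ('192.168.148.0', '192.168.149.255')
```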

How ACLS are processed

Traffic that comes into a router via an interface is compared to the ACL entries in the order they were configured, so they run from the top ACL statement to the last ACL statement of an access-list.

New statements are added to the end of the list.

The router processes network traffic against the list until it finds a match; traffic that reaches the end of the list is denied (since there was no match in the configured ACL).

“There is an implied deny for traffic that is not permitted”

for this reason you should have the frequently hit entries at the top of the list

A single ACL entry with only one deny has the effect of denying all traffic; you must have at least one permit statement.
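The top-down, first-match behaviour with the implicit deny can be sketched like this (a hypothetical evaluator, matching on source address only):

```python
import ipaddress

def _match(addr: str, network: str, wildcard: str) -> bool:
    a, n, w = (int(ipaddress.ip_address(x)) for x in (addr, network, wildcard))
    return (a ^ n) & ~w & 0xFFFFFFFF == 0

def evaluate(acl: list[tuple[str, str, str]], src: str) -> str:
    """Entries are (action, network, wildcard), checked top-down; first match wins."""
    for action, network, wildcard in acl:
        if _match(src, network, wildcard):
            return action
    return "deny"  # the implicit deny that catches everything unmatched

acl = [("permit", "10.1.1.0", "0.0.0.255")]
print(evaluate(acl, "10.1.1.42"))   # permit
print(evaluate(acl, "172.16.1.1"))  # deny (implicit)
```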

these two ACLS have the same effect

access-list 101 permit ip 10.1.1.0 0.0.0.255 172.16.1.0 0.0.0.255

!--- This command is used to permit IP traffic from 10.1.1.0 !--- network to 172.16.1.0 network. All packets with a source !--- address not in this range will be rejected.

access-list 102 permit ip 10.1.1.0 0.0.0.255 172.16.1.0 0.0.0.255

access-list 102 deny ip any any

!--- This command is used to permit IP traffic from 10.1.1.0 !--- network to 172.16.1.0 network. All packets with a source !--- address not in this range will be rejected.

In the next example, the last entry is sufficient. You do not need the first three entries because TCP includes Telnet, and IP includes TCP, User Datagram Protocol (UDP), and Internet Control Message Protocol (ICMP).


!--- This command is used to permit Telnet traffic !--- from machine 10.1.1.2 to machine 172.16.1.1.

access-list 101 permit tcp host 10.1.1.2 host 172.16.1.1 eq telnet

!--- This command is used to permit tcp traffic from !--- 10.1.1.2 host machine to 172.16.1.1 host machine.

access-list 101 permit tcp host 10.1.1.2 host 172.16.1.1

!--- This command is used to permit udp traffic from !--- 10.1.1.2 host machine to 172.16.1.1 host machine.

access-list 101 permit udp host 10.1.1.2 host 172.16.1.1

!--- This command is used to permit ip traffic from !--- 10.1.1.0 network to 172.16.1.10 network.

access-list 101 permit ip 10.1.1.0 0.0.0.255 172.16.1.0 0.0.0.255

You can use ports and traffic types in ACLs, which is where we can control what the IP networks can access. So for example

access-list 102 permit icmp host 10.1.1.1 host 172.16.1.1 14

becomes

access-list 102 permit icmp host 10.1.1.1 host 172.16.1.1 timestamp-reply

Where to place ACL’s

It is a good practice to apply the ACL on the interface closest to the source of the traffic. As shown in this example, when you try to block traffic from source to destination, you can apply an inbound ACL to E0 on router A instead of an outbound list to E1 on router C. An access-list has a deny ip any any implicitly at the end of any access-list.

DHCP and ACL’s

If traffic is related to a DHCP request and it is not explicitly permitted, the traffic is dropped, because a DHCP request in IP looks like: s=0.0.0.0 (Ethernet1/0), d=255.255.255.255, len 604, rcvd 2, UDP src=68, dst=67.

Note that the source IP address is 0.0.0.0 and the destination address is 255.255.255.255. The source port is 68 and the destination 67. Hence,

you should permit this kind of traffic in your access-list, or else it is dropped due to the implicit deny at the end of the statement.
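A commonly used permit line for this (a sketch, assuming extended ACL 101 and IOS's bootpc/bootps port keywords):

```text
! Permit DHCP client broadcasts: src 0.0.0.0 udp/68 -> dst 255.255.255.255 udp/67
access-list 101 permit udp host 0.0.0.0 eq bootpc host 255.255.255.255 eq bootps
```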

The motorway analogy

Traffic on an IP network is a bit like traffic on a road network: much as with roads, where inbound and outbound have meaning relative to a roadblock, the same sorts of meanings occur here.

  • Out—Traffic that has already been through the router and leaves the interface. The source is where it has been, on the other side of the router, and the destination is where it goes.
  • In—Traffic that arrives on the interface and then goes through the router. The source is where it has been and the destination is where it goes, on the other side of the router.
  • Inbound —If the access list is inbound, when the router receives a packet, the Cisco IOS software checks the criteria statements of the access list for a match. If the packet is permitted, the software continues to process the packet. If the packet is denied, the software discards the packet.
  • Outbound—If the access list is outbound, after the software receives and routes a packet to the outbound interface, the software checks the criteria statements of the access list for a match. If the packet is permitted, the software transmits the packet. If the packet is denied, the software discards the packet.

EDITING ACL’s

So this is in bold, because there are some rules here that I was not aware of and have fallen foul of.

Rule 1: when you edit an ACL and delete a specific line from it, the whole ACL is deleted.

So whenever you are working with an ACL, take the trouble to make a note of what was there in the first place, make the change you need in an editor, and then re-apply it.

Rule 2: if you are working with a numbered ACL, and in your edit you attach a sequence number, then the entry will find its place in the ACL.

DEBUGGING ACL’s

Use the debug ip packet 101 or debug ip packet 101 detail command in order to begin the debug process.

ACL types

Standard:

This is the command syntax format of a standard ACL.

access-list access-list-number {permit|deny} 
{host|source source-wildcard|any}

FS cheat sheet

 

 

FreeSWITCH (FS) is basically an open source, software-defined technology stack: a class 5 softswitch, IVR platform, and high-quality conference resource. It supports the endpoints below, principally SIP, but also Google Talk and Skype.

 

Module Types FS

The following module types comprise the FS IP PBX:

  • Endpoint: for talking to VoIP, PSTN, Google Talk, Skype, etc.
  • Application: there are hundreds of application modules for conferencing, voicemail, and IVR, contained in the tools module
  • Dialplan: responsible for routing calls based on call context information such as caller ID; the default dialplan module is the XML dialplan
  • Directory: provides the logins and configurations with which users can register to FS
  • Codecs: used for the encoding of media streams
  • File Formats: play audio files; the common formats are supported by the sndfile module, which is included by default
  • Loggers: record log messages, for example the logfile and xml_cdr modules
  • Languages: scripting languages; JavaScript is supported

 

Configuration files in FS

All configuration files are written in XML; the default configuration that comes with the virtual machine is located in /etc/freeswitch.

One of the first differences that is evident between Cisco Unified Communications Manager and FS is that FS has inbuilt support for SMS under the chatplan, whereas in the Cisco framework you would have to provision a load of middleware, and somewhere there would be a node.js app, but that's a story for another day.

How to register Yealink T21 to FS

Head over to the default directory; it will look like the one below. These are all the extensions in the default context, which is like the CUCM internal partition.

root@freeswitch-vm:/etc/freeswitch/directory/default#
root@freeswitch-vm:/etc/freeswitch/directory/default# ls
1000.xml 1002.xml 1004.xml 1006.xml 1008.xml 1010.xml 1012.xml 1014.xml 1016.xml 1018.xml brian.xml example.com.xml
1001.xml 1003.xml 1005.xml 1007.xml 1009.xml 1011.xml 1013.xml 1015.xml 1017.xml 1019.xml default.xml skinny-example.xml
root@freeswitch-vm:/etc/freeswitch/directory/default#

Let's say we want to register 1000 as the Yealink's second account.

So nano 1000.xml and make the config line up with the XML above.

It starts off looking like this:

<include>
<user id="1000">
<params>
<param name="1000" value="$${default_password}"/> <!-- we set this in the Yealink -->
<param name="vm-password" value="1000"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,international,local"/>
<variable name="accountcode" value="1000"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Extension 1000"/>
<variable name="effective_caller_id_number" value="1000"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
<variable name="callgroup" value="techsupport"/>
</variables>
</user>
</include>

but will end up like this:
<include>
<user id="1000">
<params>
<param name="1000" value="1000"/>
<param name="vm-password" value="1000"/>
</params>
<variables>
<variable name="toll_allow" value="domestic,international,local"/>
<variable name="accountcode" value="1000"/>
<variable name="user_context" value="default"/>
<variable name="effective_caller_id_name" value="Extension 1000"/>
<variable name="effective_caller_id_number" value="1000"/>
<variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
<variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
<variable name="callgroup" value="techsupport"/>
</variables>
</user>
</include>

 

Then it will register to the FS.

registered Yealink

The most important concept in FS is ………

The basic construct is a dialplan, which is simply a list of actions controlled by the digits dialled. A dialplan can be broken into contexts, where each context is a group of extensions, each of which contains specific actions that can be performed on the call.

The dialplan processor uses regular expressions, a pattern-matching system, to determine which extensions and actions to execute.

<extension name="example">

<condition field="destination_number" expression="^(10\d\d)$">

<action application="log" data="INFO dialed number is [$1]"/>

</condition>
</extension>

 

This captures the digits dialed and matches them against 10\d\d, which is the range 1000 to 1099. So if a user dials 1050, this executes the application called log and prints the digits dialed to the screen; the value $1 is interpolated, or expanded.
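The matching behaviour above can be sketched with an ordinary regular expression (a minimal sketch; FreeSWITCH uses PCRE, but Python's re behaves the same for this pattern):

```python
import re

# The dialplan expression: exactly four digits in the 10xx range, captured as group 1.
pattern = re.compile(r"^(10\d\d)$")

m = pattern.match("1050")
print(m.group(1) if m else "no match")  # 1050 -- this is what $1 interpolates to
print(bool(pattern.match("2050")))      # False
print(bool(pattern.match("100")))       # False (too short, anchors require 4 digits)
```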

FS has 3 main contexts

  • default
  • public
  • features

The default context

This can be thought of as the internal context, as it services users who are directly connected to FS.

There are some typical PABX-type extensions contained within conf/dialplan/default.xml; the local extension does many things:

  • routes calls between internal users
  • sends calls to the destination user's voicemail on a no-answer condition
  • enables in-call features with bind_meta_app
  • updates the local calls database

 

The local extension 

<extension name="Local_Extension">
<condition field="destination_number" expression="^(10[01][0-9])$">
<action application="export" data="dialed_extension=$1"/>
<!-- bind_meta_app can have these args <key> [a|b|ab] [a|b|o|s] <app> -->
<action application="bind_meta_app" data="1 b s execute_extension::dx XML features"/>
<action application="bind_meta_app" data="2 b s record_session::$${recordings_dir}/${caller_id_number}.${strftime(%Y-%m-%d-%H-%M-%S)}.wav"/>
<action application="bind_meta_app" data="3 b s execute_extension::cf XML features"/>
<action application="bind_meta_app" data="4 b s execute_extension::att_xfer XML features"/>
<action application="set" data="ringback=${us-ring}"/>
<action application="set" data="transfer_ringback=$${hold_music}"/>
<action application="set" data="call_timeout=30"/>
<!-- <action application="set" data="sip_exclude_contact=${network_addr}"/> -->
<action application="set" data="hangup_after_bridge=true"/>
<!--<action application="set" data="continue_on_fail=NORMAL_TEMPORARY_FAILURE,USER_BUSY,NO_ANSWER,TIMEOUT,NO_ROUTE_DESTINATION"/> -->
<action application="set" data="continue_on_fail=true"/>
<action application="hash" data="insert/${domain_name}-call_return/${dialed_extension}/${caller_id_number}"/>
<action application="hash" data="insert/${domain_name}-last_dial_ext/${dialed_extension}/${uuid}"/>
<action application="set" data="called_party_callgroup=${user_data(${dialed_extension}@${domain_name} var callgroup)}"/>
<action application="hash" data="insert/${domain_name}-last_dial_ext/${called_party_callgroup}/${uuid}"/>
<action application="hash" data="insert/${domain_name}-last_dial_ext/global/${uuid}"/>
<!--<action application="export" data="nolocal:rtp_secure_media=${user_data(${dialed_extension}@${domain_name} var rtp_secure_media)}"/>-->
<action application="hash" data="insert/${domain_name}-last_dial/${called_party_callgroup}/${uuid}"/>
<action application="bridge" data="user/${dialed_extension}@${domain_name}"/>
<action application="answer"/>
<action application="sleep" data="1000"/>
<action application="bridge" data="loopback/app=voicemail:default ${domain_name} ${dialed_extension}"/>
</condition>
</extension>

<extension name="Local_Extension_Skinny">
<condition field="destination_number" expression="^(11[01][0-9])$">
<action application="set" data="dialed_extension=$1"/>
<action application="export" data="dialed_extension=$1"/>
<action application="set" data="call_timeout=30"/>
<action application="set" data="hangup_after_bridge=true"/>
<action application="set" data="continue_on_fail=true"/>
<action application="bridge" data="skinny/internal/${destination_number}"/>
<action application="answer"/>
<action application="sleep" data="1000"/>
<action application="bridge" data="loopback/app=voicemail:default ${domain_name} ${dialed_extension}"/>
</condition>
</extension>

 

Amazon Elastic Compute Cloud EC2

Compute basics

Compute is a measure of the computational power needed to fulfil your workload. EC2 allows you to acquire compute in the form of virtual servers called instances, launched from images.

Instance types

The instance type defines the virtual hardware supporting the EC2 instance; types vary in the following dimensions:

  • vCPU
  • Memory
  • Storage ( size and type)
  • Network performance

The instance types are optimized as below

  • c4: Compute optimized
  • r3: Memory optimized
  • i2: Storage optimized
  • g2: Graphics optimized

Enhanced networking

Some instance types support enhanced networking for greater network performance. It reduces the impact of virtualization on networking by enabling a capability called single root I/O virtualization (SR-IOV), which gives more packets per second and lower latency; this will be picked up again in VPC.

AMI

The Amazon Machine Image (AMI) is similar to a VM OVA template, in that it is a software definition; at launch it defines

OS and configuration

Initial state and patching

Application and system software

There are four sources of AMIs:

  • Published by AWS
  • AWS marketplace
  • Generated from existing instances: create an image from an existing EC2 instance
  • Uploaded virtual servers: use the AWS VM Import/Export service (raw, VHD, VMDK and OVA formats)

Addressing an instance

There are several ways an instance can be addressed

  • Public DNS name: when you launch an instance, AWS creates a public DNS name for it; this name cannot be specified by the enterprise
  • Public IP: the launched instance receives a public IP address from a pool reserved by AWS; it only persists while the instance is running
  • Elastic IP: a public IP address that you reserve from a pool and assign to your instance; it persists until the customer releases it and is not tied to the lifetime of the instance

Initial access

EC2 uses public key cryptography to encrypt and decrypt login information

Virtual firewall protection

AWS controls traffic in and out of the instance via a virtual firewall called a security group, which allows you to control traffic based on the following attributes:

  • Port
  • Protocol
  • source/destination identifies the far end of the communication , can be identified by CIDR block x.x.x.x/subnet or a security group

Security groups have different capabilities depending on whether they are used with EC2-Classic or within a VPC:

  • EC2-Classic security groups control incoming instance traffic only
  • VPC security groups control both incoming and outgoing traffic

Every instance must be associated with at least one security group, but can have more.

A security group is default deny; that is to say, it will not allow any traffic that is not explicitly allowed by a security group rule.

In the case of multiple security groups, the aggregate of the groups forms the rule set: if you allow RDP from x.x.x.x in one group and HTTP from 0.0.0.0/0 in another, the EC2 instance will accept both RDP and HTTP traffic.
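The aggregation and default-deny behaviour can be sketched in a few lines of Python. This is an illustrative simulation only, not an AWS API: the rules, ports and CIDR blocks are made-up examples.

```python
from ipaddress import ip_address, ip_network

# Hypothetical aggregated rule set from two security groups:
# one allows RDP from a single office range, the other HTTP from anywhere.
rules = [
    {"port": 3389, "protocol": "tcp", "cidr": "203.0.113.0/24"},  # RDP rule
    {"port": 80,   "protocol": "tcp", "cidr": "0.0.0.0/0"},       # HTTP rule
]

def allowed(port: int, protocol: str, source_ip: str) -> bool:
    """Default deny: traffic passes only if some rule explicitly allows it."""
    return any(
        r["port"] == port
        and r["protocol"] == protocol
        and ip_address(source_ip) in ip_network(r["cidr"])
        for r in rules
    )

print(allowed(80, "tcp", "198.51.100.7"))    # HTTP from anywhere is allowed
print(allowed(3389, "tcp", "198.51.100.7"))  # RDP from outside the CIDR is denied
```

Any matching rule in any attached group is enough to admit the traffic; nothing else gets through.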

A security group is a stateful firewall, so return traffic is automatically allowed; groups are applied at the instance level rather than at the perimeter.

Instance lifecycle

One of the benefits of AWS EC2 is the ability to bootstrap, that is, the ability to run code when the instance is launched.

One of the parameters available when an instance is launched is a string value called UserData. This string is passed to the OS at launch to be executed as part of the boot process: on Linux it can be a shell script, on Windows a PowerShell script. It can do the following:

  • Apply a patch
  • Enrol a directory service
  • Install application software
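The EC2 API expects UserData to be base64 encoded before it is passed to the launch call. A minimal sketch, where the bootstrap script itself is a made-up example of the tasks listed above:

```python
import base64

# Hypothetical Linux bootstrap script covering the tasks above
# (patching and installing application software).
user_data = """#!/bin/bash
yum update -y           # apply patches
yum install -y httpd    # install application software
systemctl start httpd
"""

# EC2 expects the UserData parameter base64 encoded.
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")

# `encoded` would be supplied as the UserData parameter at launch.
print(encoded[:24])
```

Most SDKs and the console do this encoding for you, but it is worth knowing what travels over the wire.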

VM import /export

This allows you to import virtual machines from your enterprise; note that you cannot export an AMI.

Instance metadata

You can use instance metadata to control your instance: you can make a call to the OS without making a call to the AWS API. An HTTP call to http://169.254.169.254/latest/meta-data/ will return the top node of the metadata tree; some of the attributes are shown below.

  • Security group ids associated with the instance
  • Instance id
  • Instance type
  • AMI used to launch the instance

There is a lot more; see the documentation.
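A small sketch of addressing the metadata service. The base address is the fixed link-local endpoint above; the helper function and attribute names used here are just illustrations. Outside a real instance the address is unreachable, so this only builds the URLs rather than fetching them:

```python
from urllib.parse import urljoin

# The metadata service lives at this fixed link-local address on every instance.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(attribute: str = "") -> str:
    """Build the URL for a metadata attribute, e.g. 'instance-id' or 'instance-type'."""
    return urljoin(METADATA_BASE, attribute)

# On a real instance you would fetch it, e.g.:
#   urllib.request.urlopen(metadata_url("instance-id")).read()
print(metadata_url("instance-id"))
print(metadata_url("instance-type"))
```

An empty attribute returns the top node, which lists the available keys one per line.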

Managing instances

When the number of instances starts to climb, you can use tags to manage them, as you can with other AWS services; you can apply up to ten tags per instance. Monitoring of instances is done via Amazon CloudWatch.

Pricing instances

You are charged for EC2 instances while they are in a running state, but the amount you are charged depends on which of the following three pricing options you use:

  • On-demand instances: priced per hour as listed on the AWS website. The most flexible pricing option, since the enterprise controls when the instance runs, but the least cost-effective price per hour of the three.
  • Reserved instances: make a capacity reservation for predictable workloads; can save up to 75% over the on-demand hourly rate. To reserve, the enterprise needs the instance type, the availability zone and the duration of the reservation. Two factors determine the cost: the commitment and the payment option. The commitment is the duration of the reservation, one or three years; the longer the commitment, the bigger the discount. The payment options are All upfront, Partial upfront and No upfront.
  • Spot instances: for workloads that are not time critical and are tolerant of interruption. The enterprise specifies a bid price; if the bid price is above the spot price, the enterprise gets the instance. The instance will run until:
      • the customer terminates it
      • the spot price goes above the bid
      • there is not enough unused capacity to meet the demand for spot instances

If AWS needs to terminate a spot instance, it will send a two-minute termination warning.

Architectures with different pricing models (EXAM!)

So depending on the need, you can mix pricing models within a single architecture.

A website selling Christmas foods gets 20,000 hits per day in December, but only 5,000 hits per day at other times; you could use reserved instance pricing for the steady off-peak load and add on-demand instances for the December peak.
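A back-of-envelope sketch of that mixed model. All rates and instance counts here are hypothetical illustration numbers, not real AWS prices:

```python
# Hypothetical hourly rates (not real AWS prices).
on_demand_rate = 0.10   # $/hour
reserved_rate  = 0.04   # $/hour effective, ~60% saving for the commitment

hours_per_year = 365 * 24
hours_december = 31 * 24

# Reserved instances cover the 5,000-hit baseline all year;
# extra on-demand instances absorb the December spike.
baseline_cost = reserved_rate * hours_per_year * 2    # 2 reserved instances
spike_cost    = on_demand_rate * hours_december * 6   # 6 extra instances in December

total = baseline_cost + spike_cost
print(round(total, 2))
```

The point of the exercise is that paying on-demand rates for the spike alone is far cheaper than reserving (or running on-demand) peak capacity all year.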

Tenancy options

The following tenancy options can help a client achieve security and compliance goals.

Shared tenancy

This is the default tenancy model for all EC2 instances; it basically means that in the AWS data center you're sharing a blade with other customers.

Dedicated instances

Dedicated instances run on hardware that is dedicated to a single customer.

Dedicated host

The physical server is dedicated to the instance, which can help meet licensing requirements. This differs from dedicated instances, which can launch on any hardware that is dedicated to the account.

Placement groups

A placement group is a logical grouping of instances within an availability zone, so chatty applications that need to talk to each other can do so over a low-latency 10 Gbps network. Remember that to exploit this you need to choose an instance type that supports enhanced networking.

Instance stores

Instance stores provide temporary block storage for your instance; the storage is physically attached to the host server and is ideal for storing data that changes frequently. The key aspect of instance stores is that they are temporary: data in the instance store is lost when

  1. the underlying disk drive fails
  2. the instance stops (data persists across a reboot)
  3. the instance terminates

Elastic block store

The limited persistence of instance stores means that they are ill suited to more durable workloads. To get around this problem, AWS offers Elastic Block Store (EBS), whose volumes are automatically replicated within their availability zone. Multiple EBS volumes can be attached to an instance.

Types of EBS volumes

There are several different types

Magnetic volumes

These have the lowest performance and the lowest cost. They range in size from 1 GB to 1 TB and average 100 IOPS, but can burst to hundreds of IOPS. They are best used for

  • workloads where data is accessed infrequently
  • sequential reads
  • cases where low cost is required

They are billed on the size of the provisioned disk, not on what is stored on it.

General purpose SSD

Suitable for a wide range of workloads, they range in size from 1 GB to 16 TB and provide 3 IOPS per GB provisioned, capped at 10,000 IOPS. So a 1 TB volume will provide 3,000 IOPS, but a 5 TB volume won't give you 15,000 IOPS because of the cap. They are suited to workloads such as

  • system boot volumes
  • small to medium sized databases
  • development and test environments
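The gp2 baseline formula above (3 IOPS per provisioned GB, capped at 10,000) is simple enough to write down directly; a small sketch:

```python
# General purpose SSD (gp2): 3 IOPS per provisioned GB, capped at 10,000 IOPS.
GP2_IOPS_PER_GB = 3
GP2_IOPS_CAP = 10_000

def gp2_iops(size_gb: int) -> int:
    """Baseline IOPS for a gp2 volume of the given provisioned size."""
    return min(size_gb * GP2_IOPS_PER_GB, GP2_IOPS_CAP)

print(gp2_iops(1_000))  # 1 TB volume
print(gp2_iops(5_000))  # 5 TB volume: 15,000 would exceed the cap
```

This is why provisioning beyond roughly 3.3 TB buys you capacity but no additional baseline IOPS on gp2.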

Provisioned IOPS SSD

Designed to meet the needs of I/O-intensive workloads such as databases, these are the most expensive volume type. They range in size from 4 GB to 16 TB, and you can stripe multiple volumes together in a RAID 0 configuration. Pricing is based on the size of the volume and the IOPS reserved; they can provide up to 20,000 IOPS and are suitable for

  • Critical business applications
  • large database work loads

 

Protecting Data

When working with instances you need to be able to back up and recover EBS volumes via snapshots. They can be taken in the following ways:

  • AWS Management Console
  • Through the CLI
  • Through the API
  • On a schedule of regular snapshots

Snapshots are stored on the system side of S3 (not in your buckets), hence you need to use the EBS console to manipulate them.

Creating a volume from a snapshot

To use the snapshot, you create a volume from it; when you do this, the volume is created immediately.

Recovering Volumes

Since EBS volumes persist after an instance fails, it is straightforward to detach the volume and attach it to another instance so that the data can be retrieved.

Encryption volumes

When you launch an encrypted volume, Amazon uses the Key Management Service (KMS) to handle key management via the industry-standard AES-256 algorithm. The encryption happens on the servers where the volumes are attached.

Summary

  • To launch an instance you must specify an AMI, which defines the software on the instance, and an instance type, which defines the virtual hardware (memory, vCPU etc.)
  • Spot instances are best for workloads that can tolerate interruption, reserved instances are best for long-term compute needs, and on-demand instances provide a flexible platform that can scale to need
  • On-demand instances can be used to scale up a web application running on reserved instances in response to a traffic spike
  • Enhanced networking gives you higher throughput and lower jitter and latency
  • An instance can be addressed via its public IPv4 address, an Elastic IP or its public DNS name
  • Data in an instance store is lost when the instance is stopped or terminated, though it does of course survive an OS reboot
  • On-demand instances require no upfront commitment, can be launched at any time and are billed by the hour. Reserved instances require an upfront commitment. Spot instances are launched when your bid price exceeds the spot price and run until the spot price exceeds your bid price, at which point you get a two-minute warning
  • Every instance type is rated low, medium, high or 10 Gbps for network performance, with larger instance types rated higher; some instance types support enhanced networking
  • Metadata is information about an instance, such as instance ID, instance type and security groups, available over HTTP
  • Security groups are virtual firewalls that control traffic in and out of your EC2 instance. They deny by default, and you can specify traffic direction, port, protocol and source/destination address via CIDR block. They are stateful, meaning an outgoing rule will allow the response without a corresponding incoming rule
  • Magnetic volumes provide an average performance of 100 IOPS and can be provisioned up to 1 TB; they are good for cold, infrequently accessed data. General purpose SSD volumes provide 3 IOPS/GB up to 10,000 IOPS, with smaller volumes able to burst to 3,000 IOPS; they can be provisioned up to 16 TB
  • Any volume type can be encrypted at launch; encryption is built on AWS KMS and is transparent to attached instances
  • Snapshots provide a point-in-time backup of an EBS volume and are stored in S3; subsequent backups are incremental and only store the deltas

S3 101

It's called S3 after the three S's: Simple Storage Service. It is basically easy-to-use object storage, with of course a web front end. You pay only for the storage you use, so capacity planning is no longer a constraint.

Common uses are

  • Backup and archive for on premise or cloud
  • Content storage and distribution
  • Big data analytics
  • Static website hosting
  • Disaster recovery

 

Storage classes are

 

  • General purpose
  • Infrequent access
  • Archive

 

Glacier is another storage service, but it is optimised for data archiving and long-term backup; it is good for "cold data" where a retrieval time of hours is acceptable.

 

Object storage Vs Block / File storage

In traditional IT environments two kinds of storage dominate

 

  • Block storage, operates at a low level and manages data as numbered fixed size blocks
  • File storage operates at a higher level (the OS) and manages data as a hierarchy of files

 

These two systems are typically accessed over a network, for example a SAN using Fibre Channel, but they are fundamentally server- and OS-dependent.

 

S3 is cloud based object storage , it is server independent and is accessed over the internet , data is managed via standard HTTP verbs

 

Each S3 object contains

  • Data
  • MetaData

 

Objects reside in containers called buckets, which present a simple flat namespace with no file-system hierarchy. A bucket can hold an effectively unlimited number of objects. You can only GET or PUT an object; you cannot mount or open a bucket. S3 objects are automatically replicated within a region.

 

Buckets

A bucket is a container, and forms the top level namespace in S3

 

AWS Regions

 

Even though the name for a bucket is global, the bucket is created in the region that you choose, so you can control where your data is stored.

 

Objects

 

They are the entities that are actually stored in an S3 bucket. Data is the actual file itself, and metadata is data about the file. The data portion is opaque to S3: it doesn't care about the content itself. The metadata of the object is a set of name/value pairs that describes the object.

 

Metadata breaks down into

  • System metadata: used by S3, e.g. date last modified, size, MD5 digest and HTTP Content-Type
  • User metadata: assigned when the object is created; you can tag data with something meaningful

 

Keys

Every object in a bucket is identified by a unique identifier called a key, which can be up to 1024 bytes of UTF-8. You can use the same key in two different buckets, but you cannot have identical keys within the same bucket; bucket plus key forms a unique identifier for the object.

 

Object URL

S3 is internet based storage and hence has an associated URL

 

http://mybucket.s3.amazonaws.com/jack.doc

 

S3 bucket name = mybucket

Key = jack.doc
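Splitting the virtual-hosted-style URL above back into its bucket and key is a mechanical exercise; a small sketch (the helper name is my own, and it assumes the `<bucket>.s3.amazonaws.com` host form shown above):

```python
from urllib.parse import urlparse

def parse_s3_url(url: str) -> tuple[str, str]:
    """Split a virtual-hosted-style S3 URL into (bucket, key)."""
    parsed = urlparse(url)
    bucket = parsed.netloc.split(".s3")[0]   # host is <bucket>.s3.amazonaws.com
    key = parsed.path.lstrip("/")            # everything after the host is the key
    return bucket, key

print(parse_s3_url("http://mybucket.s3.amazonaws.com/jack.doc"))
```

S3 also supports a path-style form (`s3.amazonaws.com/mybucket/jack.doc`), which this simple sketch does not handle.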

 

S3 operations

  • Create / delete bucket
  • Write an object
  • Read an object
  • Delete an object
  • List keys in the bucket

 

REST interface

 

Basically, the HTTP verbs form the API:

 

  • Create = HTTP PUT (sometimes POST)
  • Read = HTTP GET
  • Delete = HTTP DELETE
  • Update = HTTP POST

 

You usually interact with S3 via higher-level interfaces rather than the REST API directly; these are:

 

  • AWS SDK
  • JavaScript
  • Java
  • .NET
  • Node.js
  • PHP
  • Python
  • Ruby
  • Go
  • C++
  • AWS CLI
  • AWS Management console

 

Durability and Availability

 

Durability = 99.999999999% (eleven nines)

Availability = 99.99%

 

Availability is achieved by redundancy across multiple devices within a region, though this can lead to data consistency issues, since it takes time for updates to propagate to all devices.

 

Access control

 

To give others access to a bucket:

 

  • Coarse-grained access control: S3 ACLs (READ, WRITE, FULL_CONTROL) at object or bucket level (legacy)

 

  • Fine-grained access control: S3 bucket policies, AWS IAM policies and query-string authentication; this is the recommended access control mechanism

 

Bucket policies include an explicit reference to the IAM principal in the policy, which can be associated with a different AWS account. Using a bucket policy you can also restrict where the bucket is accessed from, e.g. by IP address, or limit access to particular times of day.

 

Static Website Hosting

 

This is a very common use for S3, provided no server-side scripting is required (PHP, ASP.NET or JSP). Because an S3 bucket has a URL, it's easy to turn it into a website.

 

  1. Create a bucket with the same name as the desired website hostname
  2. Upload the static files to the bucket
  3. Make all the files public (world readable)
  4. Enable static website hosting for the bucket; this includes specifying an index document and an error document
  5. The website will now be available at the S3 website URL <bucket-name>.s3-website-<AWS-region>.amazonaws.com
  6. Create a friendly DNS name in your own domain, using a DNS CNAME or an Amazon Route 53 alias, that resolves to the Amazon website URL
  7. The website will now be available at your own domain name

 

Prefixes and Delimiters

 

Prefixes and delimiters provide a way to access objects in a bucket with an implied hierarchy; for example, you may want to save some server logs as

 

log/2016/january/server 42.log

log/2016/february/server 42.log

 

All of the access methods (including the AWS console) support the use of prefixes and delimiters as above. Used in conjunction with bucket policies, this technique allows you to control access at the user level.
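The folder illusion comes entirely from the listing API, not from the key space itself. A sketch of how a prefix plus delimiter query carves up a flat list of keys, mimicking what an S3 list operation returns (the key names are my own example data):

```python
# Hypothetical flat key space, as all S3 keys really are.
keys = [
    "log/2016/january/server42.log",
    "log/2016/february/server42.log",
    "log/2016/summary.txt",
]

def list_with_delimiter(keys, prefix, delimiter="/"):
    """Mimic an S3 listing: return (common 'folder' prefixes, direct contents)."""
    prefixes, contents = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # everything up to the next delimiter behaves like a sub-folder
            prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
        else:
            contents.append(key)
    return sorted(prefixes), contents

print(list_with_delimiter(keys, "log/2016/"))
```

The keys never change; only the way the listing groups them does, which is why the same objects can appear "foldered" in the console and flat via GET.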

 

Storage Classes

 

The range of storage classes are

 

  • Standard: high durability and availability, low latency with high throughput
  • Standard infrequent access (Standard-IA): same durability as Standard, but for colder data that is longer lived and less frequently accessed. Lower per-GB-month cost than Standard; the minimum object size is 128KB and the minimum storage duration is 30 days, so use it for infrequently accessed data older than 30 days
  • Reduced redundancy storage (RRS): lower durability (four nines) at a reduced cost compared with Standard
  • Glacier: low cost, no real-time access, retrieval time of several hours; controlled via the S3 API, with restored copies placed in RRS

 

Object lifecycle management

 

Data can traditionally be thought of as moving from hot to cold over its lifetime:

 

  • Hot: frequent access, low latency; use S3 Standard
  • Warm: less frequent access; after 30+ days, use Standard-IA
  • Cold: archive; after 90 days, move to Glacier
  • Deletion: after 3 years, delete

 

You can use S3 lifecycle configuration rules to move data between these tiers automatically.
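The hot/warm/cold policy above can be expressed as a simple tiering function. This is an illustration of the policy logic, not the S3 lifecycle API; the thresholds follow the table above (30 days to Standard-IA, 90 days to Glacier, deletion after 3 years):

```python
def storage_class_for_age(age_days: int) -> str:
    """Target tier for an object of the given age, per the policy above."""
    if age_days >= 3 * 365:
        return "DELETE"
    if age_days >= 90:
        return "GLACIER"
    if age_days >= 30:
        return "STANDARD_IA"
    return "STANDARD"

for age in (5, 45, 120, 1200):
    print(age, storage_class_for_age(age))
```

In practice you encode the same thresholds as lifecycle transition and expiration rules on the bucket, and S3 applies them for you.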

 

Encryption

 

In flight, data is encrypted via HTTPS (to and from S3).

 

At rest you can use several variations of SSE (server-side encryption): as you write the data to S3, you can use the Amazon Key Management Service (KMS) with the 256-bit Advanced Encryption Standard (AES-256).

 

You can also use CSE (client side encryption) in the enterprise

 

SSE-S3 (aws managed keys)

 

Check-box encryption, where AWS handles the following for S3:

 

  • Key management
  • Key protection

 

Every object is encrypted with a unique key, which is itself encrypted by a separate master key; this master key is issued monthly, with AWS rotating the keys.

 

SSE-KMS (AWS KMS Keys)

 

A fully integrated service where AWS handles key management and protection, but the enterprise manages the master keys. It has the following benefits:

 

  • Separate permissions for using the master key
  • Auditing: see who used your key
  • Visibility of failed attempts by users who did not have permission to decrypt

 

SSE-C(customer provided keys)

 

The enterprise maintains its own encryption keys, but doesn't have to manage a client-side encryption library.

 

Client side encryption

 

Encrypt on the client side before transmitting; you have two options:

 

  • Use an AWS KMS managed customer master key
  • Use a client-side master key

 

When using client-side encryption, the enterprise retains end-to-end (E2E) control of the encryption, including management of the keys.

 

Versioning

 

Helps protect against accidental deletion of data by keeping multiple versions of an object in a bucket. Versioning is activated at the bucket level; once on, it cannot be removed, only suspended.

 

You can restore an object by referencing its version ID in addition to the bucket name and object key.

 

MFA delete

 

In addition to normal security credentials, MFA Delete requires an authentication code (a temporary one-time password). It can only be enabled by the root account, with the code generated by a virtual MFA device.

 

Pre signed URLs

 

By default objects are private, meaning that only the owner has access, but the owner can create a pre-signed URL which grants time-limited permission to download objects. The URL is created using:

 

  • Owners security credentials
  • Bucket name
  • Object key
  • HTTP method ( GET for download)
  • Expiration date
  • Time

 

This gives good protection against web scrapers.

 

Multipart upload

 

AWS provides a multipart upload API for larger files. This gives better network utilisation by virtue of parallel transfers, supports pause and resume, and allows uploads where the final size is unknown.
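A sketch of the part-planning arithmetic behind a multipart upload. The function name and the 8 MB part size are my own choices; the 5 MB minimum part size (for all parts except the last) is the documented S3 constraint:

```python
import math

# S3 multipart parts must be at least 5 MB, except the last part.
MIN_PART_SIZE = 5 * 1024 * 1024

def plan_parts(object_size: int, part_size: int = 8 * 1024 * 1024):
    """Return (number_of_parts, size_of_last_part) for a multipart upload."""
    assert part_size >= MIN_PART_SIZE, "parts (except the last) must be >= 5 MB"
    parts = math.ceil(object_size / part_size)
    last = object_size - (parts - 1) * part_size
    return parts, last

print(plan_parts(100 * 1024 * 1024))  # a 100 MB object in 8 MB parts
```

Each part can then be uploaded on its own connection, in any order, which is where the parallelism and resumability come from.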

 

Range GETs

 

The range of bytes to be downloaded is specified in the HTTP Range header of the GET; this is useful if you have poor connectivity and a large object to download.
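The Range header itself is plain HTTP; a sketch of generating the headers needed to fetch an object in fixed-size chunks (the helper and chunk size are my own illustration):

```python
def range_headers(object_size: int, chunk: int = 1024 * 1024):
    """Yield Range headers covering the object in `chunk`-byte slices."""
    for start in range(0, object_size, chunk):
        end = min(start + chunk, object_size) - 1   # Range end is inclusive
        yield {"Range": f"bytes={start}-{end}"}

headers = list(range_headers(2_500_000, chunk=1_000_000))
print(headers)
```

Each slice can be retried independently, so a dropped connection only costs you the chunk in flight rather than the whole download.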

 

Cross region replication.

 

Allows automatic replication of new objects from a bucket in one AWS region to a bucket in another AWS region. The metadata and ACLs associated with the objects are also part of the replication. Versioning must be turned on in both the source and destination buckets, and you must use an IAM policy to give S3 permission to replicate.

 

Commonly used to reduce the latency required to access objects. Existing objects in a bucket are not replicated when replication is turned on; this is achieved with a separate command.

 

Logging

You can enable S3 access logs to audit the requests made to a bucket. When you enable logging you must choose where the logs will be stored, either the local bucket or another bucket; it's good practice to define a prefix such as <bucket-name>/logs. The logs include the following information:

 

  • Requester account and IP address
  • Bucket name
  • Request time
  • Action (GET, PUT, LIST)
  • Response status or error code

 

Event notifications

 

When actions are taken on an S3 bucket, event notifications provide a mechanism to perform other actions in response to the change, for example transcoding media files once they are uploaded.

 

Notifications are set up at the bucket level , and can be configured via the S3 console , or REST API or by the SDK.

 

Notifications can be sent through SNS (Simple Notification Service) or SQS (Simple Queue Service), or delivered to AWS Lambda to invoke a Lambda function.

 

Best practice , patterns , performance

 

A common pattern is to back up enterprise file storage to an S3 bucket in a hybrid deployment. If you are using S3 in a GET-intensive mode, you should put CloudFront in front of the bucket as a caching mechanism.

 

Amazon glacier

 

A low-cost archive storage service with a 3-5 hour retrieval time for the data.

 

Archives / vaults

 

Data is stored in archives, each of which can contain up to 40 TB of data, and you can have an unlimited number of archives. Vaults are containers for archives; each AWS account can have up to 1,000 vaults, which can be controlled via IAM policies or vault access policies.

 

Data retrieval

 

You can retrieve 5% of your data for free each month.

 

Glacier vs S3

 

Glacier vs S3:

  • 40 TB archives vs 5 TB objects
  • System-generated archive IDs vs user-chosen bucket names
  • Automatic encryption vs optional encryption at rest

 

How to remote a Yealink phone

Back in the day it used to be difficult to remote-control phones: the phone would have to have a CTI layer, and you would use JTAPI (the Java Telephony API) to send third-party call-control messages to it.

 

But since phones have become web servers in their own right, we can exploit HTTP verbs to control the phone.

 

The set-up for this is shown below.

 

2019_11_08_15_11_37_Yealink_T21P_E2_Phone_RC

 

Basically, we are going to set two settings:

Action URI Allow IP List: to leave it wide open we can set this to "any", or we can define a subnet here

CSTA Control: set this to Enabled (this is basically ACD-protocol speak)

Once these are set, the phone will go down for a reboot.

Once it has come back, you can issue something like this to the phone:

https://192.168.5.201/servlet?key=number=07590983600&outgoing_uri=033333333

When you send this, the browser will use the GET verb to send the request.

RCphoneresp_get

The first time you send the GET, the phone will display a message asking whether you want to allow remote control; for subsequent requests this is no longer presented.
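The action URI above is easy to assemble programmatically, which is handy if support wants to script dialling. A sketch, where the phone IP, number and outgoing line are placeholders for your own values and the helper name is my own:

```python
from urllib.parse import quote

def dial_url(phone_ip: str, number: str, outgoing_uri: str) -> str:
    """Build the Yealink action URI that triggers an outbound call."""
    return (f"https://{phone_ip}/servlet?key=number={quote(number)}"
            f"&outgoing_uri={quote(outgoing_uri)}")

url = dial_url("192.168.5.201", "07590983600", "033333333")
print(url)
# you could then send it with e.g. an HTTP client, authenticating as the
# phone's web user, exactly as the browser does above
```

Wrapping this in a small script or tool means support staff never have to hand-type the query string.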

 

rcmessage

 

You can also do a screen capture:

 

https://192.168.5.201/screencapture

 

RC_phonescreencap

This gives the support function a useful tool for generating outbound calls from site.

UCCE v 11.0 over AWS

Laying the foundation to integrate Cisco contact center with AWS connect

This blog covers how to create a UCCE environment hosted on the AWS free tier, broken down into the three stages shown below. I am not building this with a production hat on but rather a proof-of-concept hat, so the configuration here is not best practice, but it will get you to a proof of concept quickly.

  1. staging of ICM v 11.0 on EC2 instances in a simplexed architecture within a single availability zone .
  2. staging of ICM v11.0 on EC2 instances with duplexed architecture within a single availability zone
  3. staging of ICM v11.0 on EC2 instances with duplexed architecture using two different availability zones

This blog covers the 1st stage which is an ICM v 11 core built as a simplexed progger.

The software platform to be virtualized is an AD-integrated Windows 2012 R2 / SQL 2014 network (64-bit).

Windows 2012 R2 AD 101

We are going to use the AWS free tier to create a primary domain controller EC2 instance. To do this, boot the Windows 2012 R2 AMI and connect to it via RDP, then promote the server to a PDC via Server Manager (2012 has wrapped this into the Server Manager page). I built this a few weeks ago so I don't have any slides, but it is simple and pretty much vanilla. The domain I created was corp.ucce. Then create an account that can be used to join the UCCE server to corp.ucce prior to staging the ICM software (set this password to never expire).

ucce_domain_account

AWS configuration for UCCE member Server

Log into the EC2 console and launch the 2012 R2 AMI, accepting all the defaults for now; you should end up with an EBS-backed instance.

The root device type is really a statement about the storage of the root device; in the case of EBS, this is an Amazon EBS volume created from an Amazon snapshot.

Below is the other type of storage, the instance-backed store. Note that it is ephemeral, meaning that after you turn the instance off your data disappears.

instance backed

EBS-backed storage uses persistent volumes, which means your data will persist even if you cycle the instance; this suits us for UCCE.
 ebs_structure
The virtualization type HVM refers to the type of virtualization used: Amazon uses either PV (paravirtualization) or HVM (hardware virtual machine). HVM comes with everything you need to run, whereas with PV you have to mess around with GRUB to load your image.

Increasing storage of ec2 instance work-around

We have to change the size of the disk from 30 GB to at least 60 GB, since a 30 GB drive won't support the Cisco ICM v11.0 software plus the MS SQL 2014 environment. You can't do this at the provision-storage page when you first boot the AMI, since that invalidates your free tier and the instance won't spin up, but you can use the following workaround to get a bigger disk on the free tier via the Elastic Block Store console.

Elastic block storage modify volume

In the Elastic Block Store console, select the volume for the instance you just created, choose Actions, then Modify Volume; this will spawn the box below.

Change the size to 60 GB and select Modify; this should then spawn the following.

modify volume confirm

What a nice message from the Amazon engineers. You will have to log into the instance afterwards and use the disk tools to grab the extra space; if you monitor Elastic Block Store you will see the volume optimizing. We need to let this process complete before rebooting the instance and extending the drive.

Increasing volume on 2012 R2 Host (UCCE member server)

Log into your instance using the public IPv4 address of the instance you just created, using the root Administrator account (don't forget to add a rule to the security group for RDP access).

Then head for the Computer Management snap-in and select Disk Management; you should see the GUI shown below.

modify volume host

So you can see that we got the extra 30 GB via Elastic Block Store, even with the free-tier restriction 😉 Then right-click the C: drive and select Extend Volume; this should then spawn the confirmation below.

modify volume host_confirm

Windows 2012 R2  DNS configuration  for UCCE

So while we have an RDP session open to our EC2 UCCE server, we may as well join it to the AD domain we created earlier; currently this server sits in a workgroup. To get the UCCE server to find its domain controller, the easiest way is to point the server's DNS at the private IP address of the PDC. After this, pinging corp.ucce should resolve to the private IP address of the PDC.

ucce_dns

The change I made is shown here:

DNS change

Promoting UCCE server to corp.ucce Domain

Using the Windows 2012 Server Manager on the UCCE member server, select the workgroup hyperlink and then join the domain corp.ucce.

domain_change

If DNS resolves, this should pick up the private IP address of the domain controller; you will then be asked to authenticate. Use the account details you created in the AD 101 section and you should be able to join the domain after a reboot.

RDP to UCCE / EC2  @ corp.ucce

Use the credentials from the AD 101 section when you log into the UCCE member server, and of course use the public IPv4 address for the RDP session, or else your Domain Manager software (part of UCCE) will not find the corp.ucce domain.

doamin_creds

Then Server Manager should show that you're a member of corp.ucce.

Executing ICM software via S3

I used an S3 bucket to store all the software I would need for the build. This way I could browse the software from the member server and take advantage of the 1.4 megabits per second that you tend to get within an availability zone. The other cool thing about Windows 2012 is that it allows you to mount an .ISO directly.

icmsoftware

So run the ICM base software first by running setup.exe in the folder highlighted. The setup should complete without incident and produce the GUI below.

icm_build_complete

Reboot the instance as suggested.

Running the UCCE Domain Manager applet

After the reboot, connect to the EC2 instance via RDP. On the desktop there should now be a folder called Unified CCE Tools; from this folder run the Domain Manager. This will use the PDC to bring down the OU structure of the domain to the nUCCE server.

domainmanager

Staging the SQL server on nUCCE / ec2

From the nUCCE/EC2 instance, browse to the bucket where your SQL 2014 ISO is stored on S3 and mount the ISO, then run setup.exe and you should see the GUI below.

sqlserver fresh build

Don't worry about the Planning link for now; select the Installation link and then the New SQL Server link. We are also going to use the evaluation version of SQL, which gives us 180 days and can then be extended for another 180. Pretty much accept all the defaults until we get to the feature selection.

sqlfeatures

Select the options shown. For the build to complete, the .NET Framework 3.5 Service Pack 1 is required; this can be activated on the member server as a role. Without doing this you will hit the issue below.

dotnetbug

Adding .NET role to nUCCE / ec2 via 2012 role manager

Using the Server Manager app, from the Manage menu add a feature and select the Microsoft .NET Framework 3.5.

dotnetsol

Then after selecting Next, the install should progress as shown below.

dotnetrunning
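If you prefer the console, the role wizard above is equivalent to a single PowerShell cmdlet. A sketch that assembles the command (the D:\sources\sxs source path is an assumption, with the Windows ISO mounted as D:; it is only needed when the instance cannot pull the .NET 3.5 payload from Windows Update):

```python
# Sketch: the GUI role wizard mapped to one PowerShell cmdlet.
# The -Source path is an assumption (Windows 2012 R2 ISO mounted as D:).
feature = "Net-Framework-Core"  # feature name for .NET Framework 3.5
ps_command = f"Install-WindowsFeature {feature} -Source D:\\sources\\sxs"
print(ps_command)
```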

Close the application after the .NET feature has been installed. You should then be able to re-run the SQL setup and pass the check. At the instance screen shown below, accept the default SQL instance.

sqlinstance

At the Server Configuration tab, make the SQL services run under the NT AUTHORITY\SYSTEM account and set all the startup types to Automatic.

serverconfig.png

Then under the Collation tab make the following changes and click Apply.

collationoptions

Then on the Database Engine Configuration page, choose Mixed Mode, set a password for the SQL SA account, and add the current user and the administrator user.

mixedmode

Then we are ready to install.

install
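Incidentally, all of the GUI choices above (default instance, service account, startup type, mixed mode) map onto SQL Server's unattended-setup switches, so the same install can be scripted. A hedged sketch that assembles the command line (the SA password and sysadmin account are placeholders, not values from this build):

```python
# Sketch: the GUI choices expressed as SQL Server unattended-setup switches.
# The SA password and sysadmin account below are placeholders.
switches = {
    "/ACTION": "Install",
    "/FEATURES": "SQLENGINE",                     # database engine only
    "/INSTANCENAME": "MSSQLSERVER",               # the default instance
    "/SQLSVCACCOUNT": '"NT AUTHORITY\\SYSTEM"',   # service account
    "/SQLSVCSTARTUPTYPE": "Automatic",            # startup type
    "/SECURITYMODE": "SQL",                       # mixed-mode authentication
    "/SAPWD": '"YourSaPassword1!"',               # placeholder SA password
    "/SQLSYSADMINACCOUNTS": '"CORP\\Administrator"',  # placeholder admin
}
cmd = ("setup.exe /Q /IACCEPTSQLSERVERLICENSETERMS "
       + " ".join(f"{k}={v}" for k, v in switches.items()))
print(cmd)
```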

Then reboot the nUCCE instance from the EC2 dashboard. I have breezed through the SQL setup for the contact center a bit, but if you want a much more in-depth view of the setup, follow the link below.

https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/icm_enterprise/icm_enterprise_9_0_1/installation/guide/UCCE_BK_S5AE1622_00_staging-guide/CCMP_BK_S5AE1622_00_staging-guide_chapter_01011.html

Create a snapshot via Elastic Block Storage

Now is a good time to take a snapshot of the nUCCE/EC2 server, since it has a fresh AD, Windows 2012, SQL 2014 and the base ICM software, but the final app has not been built yet and there is no contact center state. To create a snapshot, follow the steps below.

createsnap

Go to Elastic Block Storage via the management console, select the volume ID for the nUCCE server, right-click and create a snapshot. Under Snapshots you should then see the backup created.

snapshot
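The same snapshot can be taken from the AWS CLI instead of the console. A sketch that builds the call (the volume ID is a placeholder; take yours from the EC2 dashboard):

```python
# Sketch: the console snapshot step as an AWS CLI invocation.
# VOLUME_ID is a placeholder - use the nUCCE volume ID from the dashboard.
VOLUME_ID = "vol-0123456789abcdef0"

cmd = [
    "aws", "ec2", "create-snapshot",
    "--volume-id", VOLUME_ID,
    "--description", "nUCCE base: AD + Win2012 + SQL2014 + ICM (no CC state)",
]
print(" ".join(cmd))
```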

SQL post install tweaks

These are the final stages of hosting the ICM part of a UCCE architecture on AWS, but we have to make the following tweaks to the SQL configuration so that all the contact center components build correctly.

sql-named pipes

Using the SQL Server Configuration Manager, for the client protocols move Named Pipes above TCP/IP. Then under the SQL Server network configuration, enable Named Pipes.

Building the UCCE Components

On the desktop of the nUCCE server there should now be a Unified CCE Web Setup shortcut, which you can use to build the components. It will take you to the web page below.

https://localhost/setup/login.action

weblogin

Log in using the credentials of the account we set up in AD 101. This tool allows us to build the following UCCE components:

  1. ICR (Intelligent Call Router)
  2. Logger
  3. Admin workstation

These three components create the "brain" of the UCCE call center, which allows you to make call routing decisions in real time against the contact center state.

Adding the nUCCE instance

The first thing we need to add is an instance, which is basically the idea of a customer, and assign it an integer as shown below. The link to this is under the Instance Management tab in the left-hand frame.

instance

Adding the nUCCE Logger

Add the logger under the Logger tab and accept the defaults here,

logger.png

and accept the defaults at the next tab.

loggenet.png

The host name of the server gets filled in by default and resolves to the private address of the EC2 instance, e.g.

domain_ping

Then at the next tab enable the Outbound option; this will later be used to make outgoing SIP calls from nUCCE.

outbound

Adding the nUCCE Router

We are going to follow basically the same process to add the router: on your desktop click the Unified CCE Web Setup icon as before, use the credentials for your domain user, then from the left-hand side click Router and accept the defaults shown below.

router1

Basically, select side A, select Simplex, and then click Next.

router 2

As before, the hostname of your EC2 instance resolves to the private IP address; this can be verified with a ping.

router3

This will match what is on your EC2 instance desktop and EC2 dashboard,

router4

and via your EC2 dashboard

router 5

At the next router install page, enable 4 PGs as below.

router 6

The PGs essentially provide us with a CTI interface to integrate third parties with.

router 7

Hopefully we can use Lambda via Connect to make a routing decision in Cisco, hence the reason for enabling the database routing option in the router. At the next screen accept the defaults for QoS; since we are essentially simplexed, these will have no effect.

router 8

Why did my private address for my EC2 instance change ?

That completes the build of the nUCCE router. The more eagle-eyed among you will see that the hostname changed between the router installation and the logger installation. This is only because I managed to kill my original install and had to re-stage. The main point to take away is that in simplex, within a single availability zone, we are going to run all the services via a single private IP address, so your hostname will be the name provided by AWS when you launch the AMI.

Building the nUCCE admin workstation.

Now that we have built the routing core within AWS, we need to build the administration interface so we can configure nUCCE/AWS. This basically follows the same steps as before: log into the web interface on the desktop of your EC2 instance and take the following configuration options.

aw1

Then select the options below,

aw2

then finally as below.

aw3

Since there is no duplex in this architecture we don't need a secondary site, but since it's a required item we can just use a placeholder of "secondary" to satisfy the install.

The network interfaces for the AW are defined as below.

aws4

Again we are going to use the hostname of our EC2 instance, which resolves to the private address of the EC2 instance.

Summary

That basically concludes the build. What we have done is implement the core intelligence of the Cisco UCCE v11.01 product on AWS.

If you click the Unified CCE Service Control applet on the EC2 desktop, you should see the services built as below.

post1

Don't worry about the PG/CG and dialer nodes (you will have the router, logger and distributor nodes); I will cover the build of these components in a later blog, where we will use them to pass calls between Connect and UCCE.

If you have any questions, or need help or assistance in building your own UCCE v11.0 sandbox over AWS, please don't hesitate to reach out to me at [email protected].