
Identity and Access Management: 7 Considerations When Your Workforce Suddenly Needs To Work From Home


The need for social distancing due to the COVID-19 Coronavirus is critical in stopping its spread. Many businesses have heeded that advice by allowing employees to work from home. While some companies may already have policies allowing work from home, many only support limited access for key personnel. There are several areas where your Identity and Access Management services may be impacted by this shift in where your workforce works.

WFH Header.png

The following seven items should be considered for your IAM infrastructure:

1.     External Access Increases the Need for Federation

SAML - #1.png

Many organizations rely on “being on the network” (internal) for access to many resources. While a VPN can provide remote access to on-premise applications, cloud-based services may require extra hand-holding. Users that cannot access the VPN, leverage shared workstations, or otherwise have limited access can use cloud-based applications to continue their work functions. These workers may also not exist in your core directory and need to be authenticated against other sources. Leveraging federation protocols, like SAML, can ease moving these workers to externally hosted applications by eliminating the need to manage and remember additional IDs and passwords.
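To make the federation flow concrete, here is a minimal sketch of building an SP-initiated SAML 2.0 AuthnRequest for the HTTP-Redirect binding using only the Python standard library. The IdP endpoint, entity ID, and ACS URL are placeholders, not values from any particular product:

import base64, datetime, uuid, zlib
from urllib.parse import urlencode

IDP_SSO_URL = "https://idp.example.com/sso"    # hypothetical IdP endpoint
SP_ENTITY_ID = "https://app.example.com/sp"    # hypothetical SP entity ID
ACS_URL = "https://app.example.com/saml/acs"   # hypothetical assertion consumer URL

def build_redirect_url():
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        'ID="_{}" Version="2.0" IssueInstant="{}" '
        'AssertionConsumerServiceURL="{}">'
        '<saml:Issuer>{}</saml:Issuer>'
        '</samlp:AuthnRequest>'
    ).format(uuid.uuid4().hex, issue_instant, ACS_URL, SP_ENTITY_ID)
    # HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
    compressor = zlib.compressobj(wbits=-15)
    deflated = compressor.compress(authn_request.encode()) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode()
    return IDP_SSO_URL + "?" + urlencode({"SAMLRequest": saml_request})

print(build_redirect_url())  # the browser is redirected here; the IdP handles login

The user authenticates once at the IdP and never needs a separate password for the hosted application.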

 

2.     Scaling Up Your MFA

MFA - #2.png

Multi-Factor Authentication (MFA) becomes more critical for external, off-network users working from home. While an ID and password may be sufficient internally, moving your workforce remote requires more sophisticated and secure authentication mechanisms. Push notifications, email, SMS, voice, or other channels can be leveraged for the additional credential. This infrastructure needs to scale quickly as more of your workforce needs to access applications off the network. Having thought-through MFA policies and infrastructure ensures that you are ready for this transition. MFA should be leveraged both for network access through VPN and for access to cloud-based or externally facing applications.
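For a sense of what the additional credential involves under the hood, here is a minimal TOTP verifier (RFC 6238: SHA-1, six digits, 30-second steps) built only from the Python standard library; it is a sketch of the math behind most authenticator apps, not any vendor's implementation:

import base64, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, window=1):
    now = time.time()
    # Tolerate one 30-second step of clock drift in either direction.
    return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
               for drift in range(-window, window + 1))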

3.     Authentication Policies Become Key

Authentication Policies - #3.png

Being able to define authentication policies based upon risk analysis ensures that the user is challenged for the appropriate credentials. A simple ID and password may suffice on-network or for low-risk applications, but when the user is accessing a server or an application with sensitive information, step-up authentication is required. Using risk-based analysis, other data points can also be used to determine whether an additional credential is required. If the user is accessing the application from a new network or at an unusual time, that user should be prompted for an additional means of validating their identity. This works well with the MFA solution identified above.
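A toy illustration of such a policy: combine a few contextual signals into a risk score, then decide whether to step up. The signals and thresholds here are illustrative, not taken from any particular product:

def risk_score(known_device, known_network, usual_hours, resource_sensitivity):
    # resource_sensitivity: 1 = low, 2 = medium, 3 = high
    score = 0
    score += 0 if known_device else 2
    score += 0 if known_network else 2
    score += 0 if usual_hours else 1
    score += resource_sensitivity - 1
    return score

def required_factors(known_device, known_network, usual_hours, resource_sensitivity):
    # Low risk: password alone; anything else steps up to MFA.
    if risk_score(known_device, known_network, usual_hours, resource_sensitivity) <= 1:
        return ["password"]
    return ["password", "mfa_push"]

# New network, unusual hour, sensitive app: step-up is required.
print(required_factors(known_device=True, known_network=False,
                       usual_hours=False, resource_sensitivity=3))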

 4.     Flexible Authorization Services Reduce Time For Needed Access Changes

Authorization - #4.png

Temporary changes to application authorization policies may be required. Integrated authorization solutions or services allow for centralized changes to access policies, which limits the need to make application changes. Applications that need to serve different user constituencies, or to permit access from new locations, may require policy changes. For example, contingent workers may not normally require access to the HR portal, so access is restricted. However, if that portal becomes the mechanism to distribute information about corporate status, these workers need to be granted access, which requires changing the portal's authorization policies.
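The contingent-worker example can be sketched as policy-as-data: the emergency change is one line against a central table, and no application code changes. The application names and roles below are illustrative:

POLICIES = {
    "hr-portal": {"allowed_roles": {"employee", "hr"}, "allow_external": True},
    "finance-app": {"allowed_roles": {"finance"}, "allow_external": False},
}

def is_authorized(app, user_roles, on_network):
    policy = POLICIES.get(app)
    if policy is None:
        return False  # default deny for unknown applications
    if not on_network and not policy["allow_external"]:
        return False
    return bool(set(user_roles) & policy["allowed_roles"])

print(is_authorized("hr-portal", {"contingent"}, on_network=False))  # False

# Temporary emergency change: contingent workers may reach the HR portal.
POLICIES["hr-portal"]["allowed_roles"].add("contingent")
print(is_authorized("hr-portal", {"contingent"}, on_network=False))  # True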

 

5.     Role Management Simplifies Changes to User Privileges

Roles - #5.png

Roles can be used to drive authorization decisions and support the changes identified above. These can be VPN or application access roles, and they can also drive decisions on provisioning user objects and role membership. Having a solution for role automation, with the required changes defined and documented, allows for flexible automation of those changes. This ensures that you can rapidly adjust to business demands. Since many roles are defined by directory groups, membership in those groups can be quickly assigned when needed and then revoked once the emergency has ended. This also ties into compliance systems and processes, which supports future attestation for resource access.
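Since roles often reduce to directory group membership, granting and later revoking one can be a pair of LDAP modify operations. A minimal sketch with the Python ldap3 library; the server, DNs, and credentials are placeholders:

from ldap3 import Server, Connection, MODIFY_ADD, MODIFY_DELETE

USER_DN = "uid=jsmith,ou=people,dc=example,dc=com"     # hypothetical user
GROUP_DN = "cn=vpn-users,ou=groups,dc=example,dc=com"  # hypothetical role group

conn = Connection(Server("ldaps://directory.example.com"),
                  user="cn=admin,dc=example,dc=com", password="secret",
                  auto_bind=True)

# Grant the role for the duration of the emergency...
conn.modify(GROUP_DN, {"member": [(MODIFY_ADD, [USER_DN])]})
# ...and revoke it once the emergency has ended.
conn.modify(GROUP_DN, {"member": [(MODIFY_DELETE, [USER_DN])]})
conn.unbind()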

6.     Self-Service Saves the Day

Self-Service - #6.png

On-boarding additional users requires both the processes and the tools that allow users to register, reset passwords, update account information, unlock accounts, and perform other self-service functions. Self-service not only minimizes help desk load, but also ensures that users can activate the appropriate credentials and register for access. Undertaking a large effort to distribute MFA tokens during an emergency is not a tenable solution. Self-service can then be integrated into your provisioning systems to assign registered users roles, distribute metadata to applications, and potentially host the forms used for self-service.
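One small piece of that tooling, sketched with the Python standard library: issuing a time-limited, single-use password-reset token. Storage and email delivery are stubbed out; a real deployment would use a durable store:

import hashlib, secrets, time

RESET_TTL_SECONDS = 15 * 60
_pending = {}  # token hash -> (user_id, expiry); use a real store in practice

def issue_reset_token(user_id):
    token = secrets.token_urlsafe(32)
    # Store only a hash so a leaked table cannot be replayed.
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = (user_id, time.time() + RESET_TTL_SECONDS)
    return token  # embedded in the reset link emailed to the user

def redeem_reset_token(token):
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(token_hash, None)  # pop: a token works only once
    if entry is None:
        return None
    user_id, expiry = entry
    return user_id if time.time() <= expiry else None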

 

7.     Mobile Application Support Might Be Required

Mobile - #7.png

A remote workforce requires access to the applications that support the job functions needed to be productive. These applications may require additional protocol support for user authentication, authorization, and profile information. Protocols like OAuth, OIDC, DSML, and others allow mobile applications to access these services. Modern IAM solutions support these and other protocols and can be leveraged as a gateway for access to identity services. This also allows for both service and user authentication, authorization, and consent.
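As one example of that protocol support, a native mobile app typically uses the OAuth 2.0 authorization-code flow with PKCE (RFC 7636). The sketch below builds the PKCE pieces with the Python standard library; the endpoint, client_id, and redirect URI are placeholders:

import base64, hashlib, secrets
from urllib.parse import urlencode

def pkce_pair():
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
authorize_url = "https://sso.example.com/as/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "mobile-app",               # hypothetical client
    "redirect_uri": "com.example.app://cb",  # hypothetical app callback
    "scope": "openid profile",
    "code_challenge": challenge,
    "code_challenge_method": "S256",
})
# The app opens authorize_url, then exchanges the returned code plus
# `verifier` at the token endpoint for access and ID tokens.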



Self-Provisioning In a Remote World


Our lives have been changed by the challenges we face due to the COVID-19 coronavirus. Long-standing corporate procedures must adapt through enhanced technical solutions and changes to processes and policy. Now that new groups of employees are working remotely, the approaches that historically enabled these formerly on-premise employees must change. Companies no longer have an easy way to provide centralized services for provisioning personal computers, standardized images, or account registration. Small to midsize companies are looking at ways to allow employees to self-provision without requiring IT involvement to deploy a standard image, set up the machine, and then either ship it to the end user or require personal pick-up of the device.

Self-provisioning has multiple meanings. From self-service identity registration to provisioning of development infrastructure, the key is that you are putting power and control in the hands of the users. Historically, terms like provisioning have been tied to management of user identities, but this now needs to be extended to all the tools used by employees based upon job function. The processes for enabling users to perform these tasks need to be put in place to not only automate self-service provisioning, but also to securely expose these services to the public internet.

Companies like Microsoft and VMware have solutions that allow companies to remotely deliver standardized installation and configuration of remote devices. Depending on the tool, different capabilities are available for off-network provisioning of computers and laptops, but the key is to allow employees to acquire their own devices and to automate the process of configuring those devices with the corporate-standard software and configuration. Cloud-based management allows organizations to configure devices remotely, with capabilities like remote updates, installation of corporate applications, and configuration of security policies for employee-procured devices.

Header Image.png

However, Identity and Access Management services need to be in place to support these self-provisioning processes. This includes handling the initial identification of the user, creation and provisioning of user accounts, and securing access to the provisioning systems. These systems can be integrated with solutions that secure authentication with technologies like multi-factor authentication (MFA) and single sign-on (SSO).

 

The following diagram highlights a sample workflow:

Self-Service Workflow
  1. The user acquires a device from a local store (e.g. the Apple Store or Best Buy).

  2. The user enrolls through a public-facing site based upon a secure set of factors known only to the user (a minimal sketch of this verification step follows this list). Enrollment includes things like creation of a password, setting profile data, and management of other security data.

  3. The user is associated with the defined network identity configured for that user (typically in the HR system). That identity is then provisioned into the corporate user repositories. Roles automatically assigned to the user control provisioning targets and infrastructure access. Federated Identity solutions can be leveraged to create a centralized global profile for the users based upon multiple backend repositories. 

  4. Once the identity is created the device is enrolled for MFA, prompting the user to create MFA credentials if they do not already exist. MFA is critical for ensuring that access to the provisioning solution is secure.

  5. After the user is fully enrolled and their accounts have been created, the device is configured by the provisioning solution. This includes any required software and updates defined by the corporate standards.
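A minimal sketch of the verification behind step 2, assuming illustrative factors (employee ID, last four digits of SSN, date of birth) and a placeholder lookup against the HR system of record:

import hmac

def hr_record_for(employee_id):
    # Placeholder: in practice, query the HR system of record.
    return {"employee_id": employee_id, "last4_ssn": "6789", "dob": "1990-04-02"}

def verify_enrollment(employee_id, last4_ssn, dob):
    record = hr_record_for(employee_id)
    if record is None:
        return False
    # Constant-time comparisons avoid leaking how close a guess was.
    return (hmac.compare_digest(record["last4_ssn"], last4_ssn)
            and hmac.compare_digest(record["dob"], dob))

# Only on success does the workflow proceed to password creation and MFA enrollment.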

A self-provisioning solution minimizes the need for IT involvement and speeds on-boarding of new users. The solution also simplifies distribution of configuration, corporate applications, and updates without requiring users to come to a central location. Although such a solution has some inherent complexity to implement, once deployed, users working remotely can be easily managed without the overhead of legacy processes. Keep in mind that an internet connection is required for this solution.

Solution: Delivering a Scalable Infrastructure for Your Remote Workforce


The last couple of blog articles have focused on some of the remote workforce challenges and recommendations for responding to COVID-19. CoreBlox has partnered with Ping Identity to deliver a cloud-based single sign-on (SSO) and multi-factor authentication (MFA) solution to allow your remote workers to continue being productive. Details on this offering can be found at https://www.coreblox.com/offers.

Here are a few things to think about to best take advantage of the offer:

1. Minimize Complexity

Your workforce may not have experience working from home, and adding more complexity or more passwords for accessing needed resources only compounds the challenges. Plan a strategy that continues to move toward your security objectives while ensuring incremental benefits. Technologies like SSO and MFA can go a long way in simplifying access and better securing the off-network experience. Approach the effort by securing a combination of high-value applications and quick hits. This shows progress and also helps to balance helpdesk load.

2. Add Security, Not a Headache

Processes like intelligent risk-based authentication ensure that users are authenticated at the appropriate level for the resource being accessed. Prompts for step-up authentication should be based upon risk evaluation. Your long-term goal should be to deliver risk-based MFA for as many systems as possible, but don’t wait for a “big bang.” Deploy MFA to critical systems like VPN connections first. Pair the delivery with self-service tools to simplify the enrollment process. Also, don’t risk authentication burnout with complex authentication processes that do not take risk into account. Solutions from companies like Preempt provide a Ping-integrated solution for risk analysis and authentication policy definition.

3. Single Sign-On Makes Workers More Productive

SSO technologies have grown significantly since their initial introduction. What started as simple on-premise web-based SSO now extends to cloud providers and may include securing applications with both on-premise and cloud-based components. It takes time to enter a password every time an application is accessed. Marrying SSO with technologies that eliminate passwords and evaluate risk increases security, reduces the number of passwords needed, and centralizes the management of credentials.

4. Provide a Centralized Jumping Off Point for Corporate Resources

Working from home can not only be isolating, but it also complicates locating the resources needed to do your job. Look to provide a central portal that links job function to needed applications and tools. This can include things like HR or CRM access, links to the internal corporate wiki, or even access to collaboration tools. Centralizing access ensures employees have a single location for all needed information. SSO into linked applications improves productivity and reduces support calls.

5. Over Communicate

Security projects can be perceived as providing limited value to those outside of the security field. You are making people learn new processes, authenticate in different ways, and access resources with which the users may not be familiar. It is better to communicate more often than to only send notifications for something that has already been implemented. Set the stage for what is coming, tout the benefits of improving your SSO and MFA infrastructure, and celebrate small victories. Security projects may be behind the scenes, but implementation of these initiatives can have very visible implications. Try to get as many users on-board as possible as early as possible. People are willing to change if they understand the benefits. Making the process easy to use is also never a bad thing.

Keeping these factors in mind will help to ensure that you are making working from home as secure and productive as possible. Remote access delivered with forethought and the right tools minimizes risk, improves access, and reduces IT overhead.

Single Sign-On Between SiteMinder 12.8 & ForgeRock 6.5


This blog post describes how to integrate SiteMinder and ForgeRock. Bi-directional single sign-on between SiteMinder and ForgeRock is achieved so that both environments can co-exist during migration. Medium to large businesses will find the ability for these two solutions to co-exist very useful. It reduces the burden on application and operations teams, providing flexibility during the application migration timeline. It also minimizes the impact on end users.

Solution Description

A request with a valid SiteMinder session to the ForgeRock environment will result in an automatic creation of a ForgeRock session. Conversely, if the request comes to the ForgeRock environment first, a post authentication plugin will create a SiteMinder session using a custom Authentication Scheme provided by ForgeRock. This Authentication Scheme uses the standard interfaces provided by SiteMinder. Hence, the ForgeRock-provided plugins ensure seamless single sign-on between the two environments. As a matter of fact, the end user doesn't really know which environment they are in.

Solution Components

  • ForgeRock Access Management 6.5.2

  • ForgeRock Identity Gateway 6.5.1

  • CA Single Sign-On / SiteMinder Policy Server 12.80

  • CA Single Sign-On SDK 12.80

Solution Overview


In the SiteMinder environment:

• ForgeRock Authentication Scheme: used by SiteMinder to validate ForgeRock OpenAM token

• Sync App: a SiteMinder protected resource used to receive ForgeRock SSO token

In the ForgeRock environment:

• SiteMinder Authentication Module: used by OpenAM to verify SiteMinder session

• Post Authentication Plugin: sends OpenAM SSO token to SiteMinder upon successful authentication

forgerock_to_siteminder.png
  1. User requests to access FR protected application first

  2. IG intercepts the request and redirects the browser to AM for authentication

  3. AM authenticates the user, creates a FR SSO token

  4. Post authentication, AM sends FR SSO token to SiteMinder

  5. SiteMinder creates a SMSESSION cookie if FR SSO token is valid

  6. SiteMinder sends back the SMSESSION cookie to AM

  7. AM sends back both of the FR and SM cookies to the user
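To make steps 4 through 6 concrete, here is a rough sketch of the token exchange. The actual post authentication plugin is Java and uses ForgeRock's plugin interfaces, so treat this Python version, the endpoint URL, and the cookie handling as illustrative only (iPlanetDirectoryPro is AM's default session cookie name):

import urllib.request

SYNC_APP_URL = "https://sm.example.com/syncapp"  # hypothetical Sync App location

def exchange_fr_token_for_smsession(fr_token):
    # Present the FR SSO token to the SiteMinder-protected Sync App; the
    # ForgeRock Authentication Scheme validates it, and SiteMinder answers
    # with an SMSESSION cookie.
    request = urllib.request.Request(
        SYNC_APP_URL, headers={"Cookie": "iPlanetDirectoryPro=" + fr_token})
    with urllib.request.urlopen(request) as response:
        for name, value in response.headers.items():
            if name.lower() == "set-cookie" and value.startswith("SMSESSION="):
                return value.split(";", 1)[0].split("=", 1)[1]
    return None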

siteminder_to_forgerock.png
  1. User requests to access SM protected application first

  2. SM creates a SM SSO token and sends it back to the user

  3. User requests to access FR protected application

  4. SM Auth Module configured in the AM authentication chain detects the existence of a SMSESSION cookie

  5. SM Auth Module validates SMSESSION cookie with SiteMinder using standard SM API

  6. If the SMSESSION cookie is valid, authentication completes and AM creates the FR SSO token

  7. AM sends back both of the FR and SM cookies to the user

Conclusion

This blog post describes the technical details of co-existence between SiteMinder and ForgeRock. This type of solution can help make your IAM modernization journey seamless. It supports the latest ForgeRock AM version, 6.5. Let CoreBlox help catapult your business to the next generation of IAM platforms.

References:

1. Github OpenAM-Connector-for-SiteMinder Project for OpenAM version 9.5 & 11.0 https://github.com/ForgeRock/OpenAM-Connector-for-Siteminder

2. ForgeRock Migration Guide: CA Single Sign-On (Siteminder SSO) to ForgeRock Identity Platform https://www.forgerock.com/resources/overview/migration-guide-ca-sso-forgerock

3. The Top 3 Integration Approaches to Migration from Oracle Access Manager (OAM)

Building a Federated Data Caching Appliance


The release of the Raspberry Pi 4 with a quad core processor and 8GB of memory opens up new possibilities for enterprise level applications on a small form factor. At $75, multiple boards can be purchased and incorporated into an appliance form factor. By clustering the boards you can achieve enhanced performance and improved availability.

One use for such an appliance is what I call a Federated Data Caching Appliance. This drop-in appliance allows you to link information from various data sources together, build views into the data based upon a schema you define, cache the information for quick retrieval, and surface the views through a variety of different protocols. I have based this on technology from Radiant Logic, but other technologies can be substituted.

Imagine taking data from your HR, CRM and inventory systems and joining the information into a common view. What insights could you gain from that information? How could your applications leverage that data? How about building a view that links a salesperson, his or her manager, his or her vacation schedule, what the salesperson has sold, and the available inventory of those items? With that information, alerts could easily be generated for a manager when a client is running low on a product, inventory is available, and the salesperson covering that client is out on vacation. It's a complicated scenario, but any data that can be pulled together and correlated can then be made available for consumption by applications. By separating the view from the physical representation, you have complete control over how the data is represented and made available through multiple protocols.
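The alerting logic that such a joined view enables can be sketched in a few lines of Python over in-memory records; in the appliance, the join itself happens in the virtual directory layer rather than in application code:

hr = {"ann": {"manager": "steven", "on_vacation": True}}
crm = {"global-inc": {"sales_rep": "ann"}}
inventory = {"global-inc": {"tshirts": 0}}

def alerts():
    # Flag customers with no stock whose covering salesperson is away.
    for customer, crm_record in crm.items():
        rep = crm_record["sales_rep"]
        if inventory[customer]["tshirts"] == 0 and hr[rep]["on_vacation"]:
            yield (hr[rep]["manager"],
                   "{} has no inventory and rep {} is on vacation".format(customer, rep))

for manager, message in alerts():
    print("alert -> {}: {}".format(manager, message))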

Radiant Logic's solution, with its sophisticated methods for quickly linking to underlying data sources, defining a schema for the data, joining it, and delivering the data through multiple protocols, provides a good engine for the Federated Data Caching Appliance. Additionally, the solution supports clustering for high availability and scalability. The cluster requires three servers, but more can be used.

This appliance could be designed to house three (or more) Raspberry Pi's in a single highly-available device at a low cost point. By adding a second appliance you gain external high-availability as well. With the web-based administration and dashboards available in FID, a UI for managing the appliance could be quickly created.

The appliance could be designed as follows:

The appliance has three Raspberry Pi's for the FID cluster, which are powered over Ethernet. The box also has redundant power. Two of the units would be deployed for high-availability.

Granted, there are some challenges to this approach. A build of FID that runs on ARM Java would have to be made available. Additionally, the default microSD-based storage would have to be replaced with something more scalable. However, this is an interesting experiment.

Have fun!

Step-by-Step Build of a Federated Data Caching Appliance: Part 1 - Overview of the Components and Their Assembly


A wise colleague once told me, "If there is something that can take any data, build a schema, and lets you mount it somehow, it's going to have many use cases." That sent me down the path of looking at ways to easily surface important details and to make querying that data responsive. By dynamically generating the needed information instead of building static representations, you can quickly integrate this data into other systems and can modify it on the fly without needing to change the underlying systems. I described this type of solution in my previous blog article, "Building a Federated Data Caching Appliance."

With the release of the 8GB version of the Raspberry Pi 4, there seemed to be an opportunity to build a low cost solution based around those principles.


There are of course many options for building such a device. Keep in mind that this is 100% unsupported by Radiant Logic as it is not a supported platform.

 I am breaking this out into four articles:

  1. Overview of the Components and Their Assembly

  2. Base Install and Configuration

  3. Radiant Logic Install Instructions

  4. Implementing the Use Case

There are many ways to do this. I have chosen these steps to make things easier for me. Please keep that in mind as you review these instructions.

I would like to recognize an article called “Getting Started with Raspberry Pi 4” by Crosstalk Solutions that helped get me started with the Raspberry Pi and its configuration.

Components

Before getting too far into this, I wanted to list out the various components I used in this proof-of-concept. I decided to simplify things by using Power Over Ethernet (PoE) instead of plugging in the Raspberry Pi’s. This made it easier for me to manage the jumble of cords I needed. If you do not have a switch with PoE capabilities, be sure to use a power supply instead.

A great way to get started with all of the needed components is by leveraging CanaKit’s Raspberry Pi 4 Starter Kits. I highly recommend them. I have no affiliation with the company. This gets you going with everything you need.

So, onto the list of equipment. 

2. equipment overview.png

I used the following items as part of this build:

  1. 3 x Raspberry Pi 4 8GB - https://www.canakit.com/raspberry-pi-4-8gb.html

  2. 3 x Power Over Ethernet Hats - https://www.canakit.com/raspberry-pi-poe-hat.html

  3. 3 x MicroSD Cards: https://www.canakit.com/raspberry-pi-sd-card-noobs.html

  4. 1 x Micro HDMI Cable - https://www.canakit.com/raspberry-4-mico-hdmi-cable.html

  5. 1 x Micro USB Cable (Pi 4 kit) - https://www.canakit.com/raspberry-pi-4-complete-starter-kit.html

  6. 3 x Heat Sink sets - https://www.canakit.com/raspberry-pi-4-heat-sinks.html

  7. 1 x HDMI Display - https://www.amazon.com/gp/product/B07WW4GMVR/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1

  8. 1 x Keyboard - https://www.canakit.com/raspberry-pi-wireless-keyboard-rii.html

  9. 3 x Ethernet Cables - https://store.ui.com/collections/unifi-accessories/products/unifi-ethernet-patch-cable-with-bendable-booted-rj45

  10. 3 x HighPi Raspberry Pi Case for Pi4 - https://www.pishop.us/product/highpi-raspberry-pi-case-for-pi4/

  11. 3 x GPIO Stacking Header for Pi - https://www.pishop.us/product/gpio-stacking-header-for-pi-extra-long-2-20-pins/

  12. 3 x 2X2 Pin - https://www.pishop.us/product/2x2-pin-2-54mm-double-row-female-straight-header/

  13. 3 x Brass Standoffs, M2.5 x 15mm - https://www.pishop.us/product/brass-standoffs-m2-5-x-15mm-package-of-8/

  14. Screwdriver (I didn’t use it, but it can come in handy)

I used 11, 12 and 13 in order to still have access to the GPIO pins and also to raise the PoE Hat enough to make space for the heat sinks.

Assembly

The following steps outline how I assembled the Raspberry Pi’s. There are probably a million ways to do this. This works for me. Do it your way if you want.

1.     Remove Raspberry Pi from box

2a - In box.png

2.     Attach the heat sinks (please refer to the excellent diagram from CanaKit in their Quick Start Guide, https://www.canakit.com/Media/CanaKit-Raspberry-Pi-Quick-Start-Guide-3.2.pdf, which I also used to help identify these steps)

2b - Canakit.png

a.     Remove the heat sinks from the plastic

5 - Remove Heatsinks.png

b.     Peel off the protective film from the heat sink

6 - Remove Protective Film.png

c.     Press the heat sink onto the Raspberry Pi using the CanaKit diagram above

7 - Heatsinks on.png

3.     Attach M2.5 x 15mm Standoffs, 2X2 Pins, and GPIO Stacking Headers

a.     Attach the standoffs using the supplied screws to the board at the four hole points – I personally do this finger tight

8 - Standoff on.png

b.     Push the 2x2 Pins onto the PoE Header (see diagram in step 2) – Be careful not to bend the pins

9 - 2x2 On.png

c.     Push the GPIO Stacking Header onto GPIO Header (see diagram in step 2) – Be careful not to bend the pins

10 - GPIO Header.png

4.     Attach PoE hat

a.     Remove the PoE hat from the box and static bag – you can discard the screws and standoffs that came with the PoE Hat

11 - Removed PoE Hat.png

b.     Carefully press the hat onto the GPIO and PoE Header pins

12 - PoE Hat On.png

c.     Screw the 4 nuts onto the standoffs

5.     Put the Raspberry Pi in its case

 a.     Attach the feet to the bottom of the case

14 - Feet on case.png

b.     Flip over the case and follow the instructions printed in the case

15 - Case Instructions.png

c. First insert the front of the board and then snap it into place

16 - In Case.png

d. Snap the lid into place

17 - Lid On.png

6.     Set up a monitor and keyboard if desired (we will use ssh in this set of instructions)

Attach the Raspberry Pi to a USB keyboard and HDMI display if desired. An HDMI television will work as well. The HDMI port on a computer will not work, since it is an output, not an input. If you want to capture the display on a computer, use a capture card. Keep in mind that the USB-C port on the Raspberry Pi is used to power the unit if you are not using PoE. I have a different keyboard in the image below, but it wasn't used in the configuration.

Refer to the diagram in section 2 for the connection ports.

18 - Keyboard and Monitor.png

This article continues in Part 2 - Base Install and Configuration

Step-by-Step Build of a Federated Data Caching Appliance: Part 2 - Base Install and Configuration


Overview

This post is a continuation of Part 1 - Overview of the Components and Their Assembly. In this article we will install the base operating system, Ubuntu, and get the Raspberry Pi’s (RPI) ready for the Radiant Logic install. We will start by flashing the Micro SD card, then assign the RPI a static IP address and update Ubuntu.


Prepare the Micro SD Card and Configure Ubuntu

1. The first step is to flash the Micro SD card with the base OS

a. Download and install the Raspberry Pi Imager for your OS at: https://www.raspberrypi.org/downloads/

19 - Raspberry Pi Imager Download.png

b. Put the Micro SD card in an adapter and mount it on your computer

20 - micro SD Card in Adapter.png

c.     Open the Raspberry Pi Imager

21 - Raspberry Pi Imager Home.png

d. Click the [CHOOSE OS] button

22 - Choose OS.png

e. Select: Ubuntu

23 - Select Ubuntu.png

f. Select: Ubuntu 20.04.01 LTS (Raspberry Pi 3/4) - 64-bit server OS for arm64 architectures

24 - Select 64-bit.png

g.     Click the [CHOOSE SD CARD] button

25 - Choose SD Card.png

h. Select the inserted Micro SD card (NOTE: Be careful to select the correct drive or you can permanently lose data)

26 - SD card chosen.png

i. Click the [WRITE] button

27 - After write button.png

j. Click the [YES] button (enter admin credentials if needed)

28 - Writing Card.png

k. Click the [CONTINUE] button once the imager finishes

29 - After writing card.png

2. Remove the micro SD card and put the card into Raspberry Pi

30 - Insert SD Card.png

3. Plug in the ethernet connection (or power if not using PoE)

31 - Plug In Ethernet.png

4.     Load a terminal window or ssh client and ssh to the Raspberry Pi (the use of ssh and an ssh client is beyond the scope of this article)

 a.     To locate the IP address of the Raspberry Pi, consult your router’s instructions or use the following method:

 On Ubuntu and Mac OS use the command:

arp -na | grep -i "b8:27:eb"

 If this doesn't work and you are using the latest Raspberry Pi 4, instead run:

arp -na | grep -i "dc:a6:32"

 On Windows:

arp -a | findstr b8-27-eb

 If this doesn't work and you are using the latest Raspberry Pi 4, instead run:

arp -a | findstr dc-a6-32

 b.     This returns output similar to the following:

 (xx.xx.xx.x) at b8:27:eb:yy:yy:yy [ether] on xxxxxx

 5.     Use the following credentials:

  • ID: ubuntu

  • Password: ubuntu 

6.     The login screen loads

32 - Login screen.png

7. Enter the ubuntu user's password (note that it will not be displayed)

33 - Ubuntu user password.png

8. Enter and confirm a secure password (note that it will not be displayed)

34 - Enter new password.png

9. The connection will close

10. ssh back to the Raspberry Pi with the ubuntu user's new password

35 - ssh with new password.png

11. Next update the apt repository: sudo apt update

36 - APT Update.png

12. The packages are updated

37 - Packages updated.png

13. Upgrade the software packages: sudo apt upgrade

38 - Upgrade apt.png

14. Select 'Y' to upgrade the Raspberry Pi

39 - Select Y.png

15. The upgrade process completes

40 - Upgrade complete.png

16. Set the hostname for the Raspberry Pi (replace <HOSTNAME> with your desired name): sudo hostnamectl set-hostname <HOSTNAME>

41 - Set hostname.png

17. Validate that the /etc/hosts file does not contain any other names (remove them, leaving the localhost entry): more /etc/hosts

42 - Validate hosts.png

18. Assign a static IP to the Raspberry Pi based upon your router's configuration. The router-specific configuration for this is beyond the scope of this article; a device-side alternative is sketched after these steps.

43 - Static IP.png

19. Reboot the Raspberry Pi: sudo reboot

44 - Reboot.png
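The approach above assigns the address at the router. If you would rather pin the address on the Pi itself, a netplan file along the following lines is one device-side alternative; the interface name and addresses are examples, and on Ubuntu server images you may also need to disable cloud-init's network management:

# /etc/netplan/99-static.yaml (example name and values; adjust for your network)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.101/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]

Then apply the change with: sudo netplan apply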

The base setup is complete. Repeat this process on the other two Raspberry Pi’s.

This article continues in Part 3: Radiant Logic Install Instructions

Step-by-Step Build of a Federated Data Caching Appliance: Part 3 - Radiant Logic Install Instructions


Overview

This post is a continuation of Part 2 - Base Install and Configuration. In this article we will install the Radiant Logic FID and complete any initial configuration steps.

NOTE: These are non-standard and unsupported configurations. Refer to screenshots from previous articles for steps in this section without screenshots.


Radiant Logic Install Instructions

1.     Install the Leader server

a.     SSH to your #1 server

b.     Log in to the Raspberry Pi

c.     Next update the apt repository: sudo apt update

d.     Install the default OpenJDK: sudo apt install default-jdk

45 - Install Java.png

e. Enter ‘Y’ if prompted

46 - Java enter Y.png

f. The installer completes

47 - Java Installed.png

g. Next update the packages to install OpenJDK 8: sudo apt update

h. Install OpenJDK 8: sudo apt install openjdk-8-jdk

48 - Install JDK 8.png

i. Enter ‘Y’ if prompted

49 - Enter y jdk 8.png

j. The installer completes

50 - jdk8 installer completes.png

k. Follow the same instructions to install net tools: sudo apt update && sudo apt install net-tools

l. Make the following two directories in /home/ubuntu

  • Apps

  • Installers

51 - make base directories.png

m. Create a radiantlogic directory under the Installers directory

52 - create radiantlogic directory.png

n. Obtain the latest FID slim package and template file from Radiant Logic

o. Update the install-test.properties template file with the details for your installation (Note: use a more secure password)

# Is this the first node of the cluster? (true/false)
cluster.firstnode=true 
# If you plan on deploying multiple clusters (e.g. across data centers), the cluster name for each data center must be unique.
cluster.name=RPI
# The FQDN to use to address this node
# Leave blank to resolve it automatically
node.host.name=cluster1.marauder.local
# Is this node a follower only? (true/false)
node.follower.only=false
# Use this node as a coordination node? (true/false)
node.use.local.zk=true
# Whether to install the samples/demo resources (true/false)
install.samples=false
# OS user/login to use
# Leave blank to use current user
install.user=
# ZooKeeper connection string (comma separated list of host:port)
# zk.connstring=host1:2181,host2:2181,host3:2181
#
# - When not using an external ZK ensemble:
#    For the first node, leave it blank to create a new ZK ensemble
#    For the following nodes, please specify the connection string of the existing nodes (at least 1)
# - When using an external ZK ensemble:
#     Please enter the connection string of the external ZK ensemble.
zk.connstring=
zk.login=admin
zk.password=Passw0rd!
 
#########################################################
# Parameters below are only relevant for the first node #
#########################################################
zk.client.port=2181
zk.ensemble.port=2888
zk.leader.port=3888
zk.jmx.port=2182
vds.admin.login=cn=Directory Manager
vds.admin.password=Passw0rd!
vds.ldap.port=2389
vds.ldaps.port=1636
# Use TLS? (true/false)
vds.ssl.tls=false
vds.admin.http.port=9100
vds.admin.https.port=9101
vds.http.port=8089
vds.https.port=8090
scheduler.port=1099
webapps.http.port=7070
webapps.https.port=7171
appserver.login=admin
appserver.password=Passw0rd!
appserver.admin.port=4848
appserver.jmx.port=8686

p. sftp the slim package and template file to the /home/ubuntu/Installers/radiantlogic directory on the Raspberry Pi (the use of sftp is beyond the scope of this article)

53 - sftp files.png

q. Edit the /etc/environment file: sudo nano /etc/environment

54 - edit environment file.png

r. The /etc/environment file loads

55 - etc environment file.png

s.     Add the following lines to the file:

RLI_HOME="/home/ubuntu/Apps/vds"
RLI_JAVA_HOME="/home/ubuntu/Apps/vds/jdk/jre"
RLI_APPSERVER_HOME="/home/ubuntu/Apps/vds/appserver/glassfish"
56 - add env variables.png

t. Save the file: ^S ^X

u. Reboot to set the environment variables: sudo reboot

v. Copy the FID package to the /home/ubuntu/Apps directory: cp radiantone_7.3.10_slim_linux_64.tar.gz /home/ubuntu/Apps/

57 - copy fid package.png

w. Change to the /home/ubuntu/Apps directory: cd /home/ubuntu/Apps

58 - change to Apps directory.png

x. Extract the FID package (use the current package name for your installation): tar xvzf radiantone_7.3.10_slim_linux_64.tar.gz

59 - extract package.png

y. The files are extracted into the vds directory

60 - extracted files.png

z. Remove the package if desired (use the current package name for your installation): rm radiantone_7.3.10_slim_linux_64.tar.gz

aa. Change to the vds/jdk directory

61 - jdk directory.png

bb. Back up and remove the JDK files in the jdk directory: sudo rm -rf *

62 - rm jdk.png

cc. Copy the OpenJDK 8 files to this location: cp -r /usr/lib/jvm/java-8-openjdk-arm64/* .

63 - copy jdk files.png

dd. Add your license.lic file to: /home/ubuntu/Apps/vds/vds_server

64 - add license.lic.png

ee. Change back to the /home/ubuntu/Installers/radiantlogic directory

65 - Return to Installers radiantlogic directory.png

ff. Run the following command: sudo $RLI_HOME/bin/instanceManager.sh --setup-install $PWD/install-test.properties

66 - Run Instance Manager.png

gg. Instance Manager completes creating the first node

67 - instance manager completes.png

hh.   Start FID and the Control Panel using Radiant Logic’s documentation

2.     Install the follower nodes (note that many of the steps are similar and screenshots are not duplicated):

a.     SSH to your follower server

b.     Log in to the Raspberry Pi

c.     Next update the apt repository: sudo apt update

d.     Install the default OpenJDK: sudo apt install default-jdk

e.     Enter ‘Y’ if prompted

f.      The installer completes

g.     Next update the packages to install OpenJDK 8: sudo apt update

h.     Install OpenJDK 8: sudo apt install openjdk-8-jdk

i.      Enter ‘Y’ if prompted

j.      The installer completes

k.     Follow the same instructions to install net tools: sudo apt update && sudo apt install net-tools

l.      Make the following two directories in /home/ubuntu

  •  Apps

  • Installers

m.   Create a radiantlogic directory under the Installers directory

n.     Obtain the latest FID slim package and template file from Radiant Logic

o.     Update the install-test.properties template file with the follower details for your installation (Note: use a more secure password)

# Is this the first node of the cluster? (true/false)
cluster.firstnode=false
# If you plan on deploying multiple clusters (e.g. across data centers), the cluster name for each data center must be unique.
cluster.name=RPI
# The FQDN to use to address this node
# Leave blank to resolve it automatically
node.host.name=cluster2.marauder.local
# Is this node a follower only? (true/false)
node.follower.only=false
# Use this node as a coordination node? (true/false)
node.use.local.zk=true
# Whether to install the samples/demo resources (true/false)
install.samples=false
# OS user/login to use
# Leave blank to use current user
install.user=
# ZooKeeper connection string (comma separated list of host:port)
# zk.connstring=host1:2181,host2:2181,host3:2181
#
# - When not using an external ZK ensemble:
#    For the first node, leave it blank to create a new ZK ensemble
#    For the following nodes, please specify the connection string of the existing nodes (at least 1)
# - When using an external ZK ensemble:
#     Please enter the connection string of the external ZK ensemble.
zk.connstring=cluster1.marauder.local
zk.login=admin
zk.password=Passw0rd!
 
#########################################################
# Parameters below are only relevant for the first node #
#########################################################

p.     sftp the slim package and template file to the /home/ubuntu/Installers/radiantlogic directory on the Raspberry Pi (the use of sftp is beyond the scope of this article)

q.     Edit the /etc/environment file: sudo nano /etc/environment

r.     The /etc/environment file loads

s.     Add the following lines to the file:

RLI_HOME="/home/ubuntu/Apps/vds"
RLI_JAVA_HOME="/home/ubuntu/Apps/vds/jdk/jre"
RLI_APPSERVER_HOME="/home/ubuntu/Apps/vds/appserver/glassfish"

t. Save the file: ^S ^X

u. Reboot to set the environment variables: sudo reboot

v. Copy the FID package to the /home/ubuntu/Apps directory: cp radiantone_7.3.10_slim_linux_64.tar.gz /home/ubuntu/Apps/

w. Change to the /home/ubuntu/Apps directory: cd /home/ubuntu/Apps

x. Extract the FID package: tar xvzf radiantone_7.3.10_slim_linux_64.tar.gz

y. The files are extracted into the vds directory

z. Remove the package if desired: rm radiantone_7.3.10_slim_linux_64.tar.gz

aa. Change to the vds/jdk directory

bb. Backup and remove the files in this directory: sudo rm -rf *

cc. Copy the OpenJDK 8 files to this location: cp -r /usr/lib/jvm/java-8-openjdk-arm64/* .

dd. Add your license.lic file to: /home/ubuntu/Apps/vds/vds_server

ee. Change back to the /home/ubuntu/Installers/radiantlogic directory

ff. Run the following command: sudo $RLI_HOME/bin/instanceManager.sh --setup-install $PWD/install-test.properties

gg. Instance Manager completes creating the follower node

hh. Start FID and the Control Panel using Radiant Logic's documentation

ii. Repeat these steps on any other remaining follower nodes

68 - cluster.png

The Radiant Logic cluster is now installed.
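As an optional sanity check (not part of Radiant Logic's documented procedure), you can confirm that a node answers on the LDAP port configured in install-test.properties (vds.ldap.port=2389) with a short Python ldap3 script; the host and credentials below are this example's values:

from ldap3 import Server, Connection, ALL

server = Server("cluster1.marauder.local", port=2389, get_info=ALL)
conn = Connection(server, user="cn=Directory Manager",
                  password="Passw0rd!", auto_bind=True)
print(server.info)  # rootDSE details print only if the bind succeeded
conn.unbind()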

This article continues in Part 4: Implementing the Use Case


Step-by-Step Build of a Federated Data Caching Appliance: Part 4 - Implementing the Use Case


Overview

This post is a continuation of Part 3 - Radiant Logic Install Instructions. Now that the infrastructure is in place, it is time to implement the use case from the previous blog article introducing this concept. As a reminder, the plan is to deliver views of the HR, CRM and inventory systems, and to join the information into a common view. Once that is available, logic can be implemented to make determinations about required activities. We are going to identify salespeople on vacation whose customers have low inventory. Keep in mind that there are multiple ways to do this. I am leveraging this approach to simplify the configuration. This is not what I suggest for production implementations.


Implementing the Use Case

To create this example, I generated three proxy views that point to the underlying repositories. No data is stored on the appliance itself in this scenario.

69 - Proxy List.png

The first view is the proxy (customers) for the customer data. The purpose of this view is to bring in the customer list, the customer's sales representative, and their current inventory of Mad Marauder t-shirts. To make the data easier to consume, I created a computed attribute which calculates the inventory for a customer and concatenates it with the customer name.

70 - Customer Inventory Computed Attribute.png

The second proxy (data) is the list of internal employees. A computed attribute generates the data needed for the sales dashboard by combining the salesperson’s name with the inventory attribute that was generated on the customer proxy.

71 - sales dashboard computed attribute.png

The customerInventory attribute is made available by joining the data proxy to the customer proxy and returning that attribute and related values.

72 - sales customer join.png

Similarly, the manager view proxy (managerview) is joined to the employee view to bring back the salesDashboard attribute and associated values.

73 - Manager View Sales Dashboard.png

For this example, the following screenshot shows the list of customers in the system. Note that this list is being dynamically generated in the proxy and is not a static representation of the information.

74 - Customer view.png

There are also two employees listed for this example. Ann is the employee on vacation and Steven is her manager.

75 - Employee view.png

Looking at the manager branch, there are two things to consider. The first is that the users have been limited to managers only. Second, Ann is displayed in the salesDashboard attribute that was brought in through the join. Remember that this attribute was itself the result of a join from the employees to the customers in order to get the customer inventory. Sample logic is also displayed to show that Ann is on vacation.

76 - Manager view.png

This data can then be leveraged by a web application to make it easily consumable.

Web Browser Image.png
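For instance, the web application could read the manager view over LDAP with a few lines of Python ldap3; the search base below is hypothetical and depends on where the views are mounted in FID:

from ldap3 import Server, Connection, ALL, SUBTREE

conn = Connection(Server("cluster1.marauder.local", port=2389, get_info=ALL),
                  user="cn=Directory Manager", password="Passw0rd!", auto_bind=True)
conn.search("ou=managerview,o=vds",  # hypothetical base DN for the manager view
            "(salesDashboard=*)", search_scope=SUBTREE,
            attributes=["cn", "salesDashboard"])
for entry in conn.entries:
    print(entry.entry_dn, entry.salesDashboard)
conn.unbind()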

By taking advantage of lower cost hardware that can be easily clustered, we were able to build a solution that gave Steven important insights into the company’s customers. He now knows that Global Inc has 0 inventory and their sales rep is out of the office. There are limitless other uses that can be found for such a device. Maybe you can come up with a couple as well.

CoreBlox Named a Ping Identity Advanced Delivery Partner


FOR IMMEDIATE RELEASE: December 3, 2020

CoreBlox LLC

877-TRY-BLOX

info@coreblox.com

 

CoreBlox Named a Ping Identity Advanced Delivery Partner

Level reflects the growth of CoreBlox’s Ping Practice in 2020

 
partner-badge-advanced.png
 

 

 

New York, NY: CoreBlox LLC, a leading provider of Identity & Access Management services and solutions, today announced that it has been named a Ping Identity Advanced Delivery Partner. The Advanced designation is a reflection of the significant investment and continued commitment by CoreBlox in this longstanding partnership.

 

“Becoming an Advanced Delivery Partner is a recognition of the investment we’ve made in preparing our services team to address the growing needs of digital transformation with Ping Identity’s industry leading solutions,” says Chad Northrup, President at CoreBlox. “Our clients can be assured that they’ll be working with vendor certified resources who have a track record for delivering successful Ping deployments.”

 

In addition to meeting the required benchmarks for consultant certifications and delivery, CoreBlox was also recognized with an Innovator Specialization for a unique solution that was developed for a leading automotive manufacturer. The solution leveraged the CoreBlox Token Service, which allows PingFederate to securely exchange tokens with Symantec SiteMinder.

 

To learn more about CoreBlox and how we enable digital transformation through Identity & Access Management, please visit www.coreblox.com.

 

About CoreBlox

 

CoreBlox, a Division of Winmill, is a premier provider of Identity & Access Management solutions for enterprise, federated, and cloud environments. We partner with leading industry vendors such as Ping Identity, Radiant Logic, SailPoint & Strata to ensure that we are able to deliver the optimal solution for our clients’ unique needs. From strategy & architecture to deployment and ongoing management, CoreBlox helps to make identity a strategic advantage.

Digital Transformation and Identity and Access Management

$
0
0

Why You Need to Consider Identity and Access Management When Defining Your Digital Transformation Strategy

Overview

Your Identity and Access Management (IAM) strategy plays a key role in determining your digital transformation strategy. When evaluating business processes, the security of those new processes must be considered. While it is necessary to implement strong security practices, consideration of usability and ease-of-use needs to be factored into the design. This article will outline ways in which you can incorporate Identity and Access Management processes into your digital transformation strategy.

 

What is Digital Transformation

Digital transformation is a re-engineering of your business processes to take advantage of modern technologies. It is not just a matter of taking a process and making it digital, but also a review and examination of how your business is done and how it can be made better. The key to digital transformation is that it is about the customer at its core. New technologies and processes can be used to define new ways to do business.

Digital transformation goes beyond a single organization. New processes need to cross the historic corporate silos, allowing you to define processes that bring together marketing, sales and services in how you engage your customers. These new processes can deliver a significant competitive advantage over companies that continue with legacy processes.

While it is easy to think of a customer as someone who buys your goods and services, it is important to keep in mind that employees are customers as well. Employees have embraced the modern age and expect to be able to interact with their employers in a truly connected fashion. This “always on” environment must be considered as you look to define your digital transformation strategy.

 

How Does Identity and Access Management Play a Role

Identity and Access Management plays a key role in your digital transformation strategy. It contains the underlying processes that manage identities across your corporate systems and provides the front door for access to those systems. IAM technologies must be reviewed as part of your digital transformation analysis. Inclusion of your security organization in the process is a necessity.

Remember that when reviewing your IAM strategy, both customers and employees are a direct consideration. Customer behavior begins with how you manage that customer’s identity and how you determine that identity when the customer interacts with your systems. Employees need access to that information in a secure and easy-to-use fashion. Overly complex authentication processes, while perhaps highly secure, have a negative impact on user experience. The use of manual or complicated identity management processes will only result in poorly managed identities. This makes it challenging to ensure correct system access, define the processes around managing that access, and certify the identities for compliance purposes.

Your IAM digital transformation strategy is your first step toward gaining visibility into the complete view of customer behavior. You must securely identify the user before you can allow access. Additionally, security behaviors can inform your decisions based upon where customers are logging in, how customers prefer to authenticate, what systems the users are accessing, or even when users are using your resources. This information not only helps your security practice identify potential breaches; these behaviors can also be shared with other teams to determine how to best serve customers and market additional services to those users.

 

Identity and Access Management Key Factors

When you begin reimagining your Identity and Access Management processes, your main consideration is how you can personalize the experience of your customer interactions. While this is not solely the purview of your IAM systems, the experience begins with those systems and is a factor with every click the user makes. The usability of those processes, the ability to build an interface that best suits your customer’s needs, and the information you gain from those clicks are all factors that need to be considered.

Engaging customers where they are, leveraging new technologies, is at the core of digital transformation. While something as simple as social sign-in seems minor, acknowledging that user behavior is driven by common online interactions simplifies the user experience. However, as part of that analysis, the level of security required must be considered. Perhaps signing in with an Apple ID is sufficient when the user is accessing the system from a known location, but if the user is signing in from a new location or performing a sensitive transaction, an additional factor to identify the user is required. These authentication policies are an example of ways to engage your customer that simplify the user experience. Additionally, single sign-on ensures that the user is not prompted multiple times, reducing user dissatisfaction, and better secures the environment as the user crosses system and application components.

In order to provide a unified customer experience, it is necessary to enhance user profiles for better personalization. Data regarding the user may exist in multiple systems. Bringing those attributes together allows you to enrich the profile and provide a better customer experience. Technologies like a Federated Identity Service allow you to unify what you know about the customer without needing each system to connect to multiple backends to get that data. As an identity integration layer, these services allow you to better unify identity information, improve security, create custom views into identity attributes, and even persist data locally as needed. This integration layer speeds deployments and simplifies integration across systems. It puts the customer at the core by bringing together all that you know about that user, and it centralizes access to user information, which can be used to determine user behavior. This improves your ability to scale your systems and future-proof your security infrastructure.

Another way to speed the deployment of your IAM digital transformation is to leverage cloud-based services for your identity infrastructure. There are several considerations in leveraging a cloud-based service. The primary consideration is how much control you need over your user identities. For highly secure environments, hosting those identities offsite may not be possible. Another consideration is how many of your applications are cloud-based or have a mechanism for federated sign-in with technologies like SAML. If you have a high number of on-premises applications, a cloud-based identity service may not be as relevant a choice. However, keep in mind that one of the main drivers for digital transformation is to review those applications and determine if they can be modernized. Even factors such as whether you are securing customer-facing or employee-facing systems need to be considered. The licensing costs for large customer-facing systems may make some cloud-based services untenable.

Developing a strategy for delivering your IAM components as microservices also speeds your time to market. This allows you to externalize security from the applications and centralize the management of security policy without the need to deliver monolithic legacy technologies. Microservices allow applications to be created using a collection of loosely coupled services. The services are fine-grained and lightweight. This improves modularity and enables flexibility during the development phase of the application, making the application easier to understand. When designing applications, identity becomes a key factor to building out a personalized user experience. Identity also enables other microservices for tasks like authorization, single sign-on, identity management and compliance. These microservices can then be leveraged to engage the customer on the platform of their choice. Whether it is a mobile application or a website, a common personalized experience can be delivered.
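As a toy illustration of the pattern (not any particular product), here is a minimal, self-contained authorization microservice in Python: applications POST a user and an application name and receive an allow or deny decision, so policy lives in one lightweight service rather than in each application. A production service would validate tokens and sit behind TLS:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

GRANTS = {"alice": {"hr-portal", "crm"}, "bob": {"crm"}}  # illustrative grants

class AuthzHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        allowed = body.get("app") in GRANTS.get(body.get("user"), set())
        payload = json.dumps({"allowed": allowed}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Example request: POST {"user": "alice", "app": "hr-portal"} -> {"allowed": true}
    HTTPServer(("127.0.0.1", 8081), AuthzHandler).serve_forever()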

Embracing DevOps practices can also modernize your Identity and Access Management infrastructure and processes. DevOps combines your IAM processes and technologies with your IT operations. This can help shorten your release cycles and improve the quality of your systems. Leveraging an agile approach to your releases brings incremental successes and eliminates the historic “big-bang” approach to delivering IAM technologies. Technologies like Kubernetes for orchestration help automate the deployment, scaling and management of your IAM infrastructure. When built with microservices in mind, individual components of your IAM infrastructure can be enhanced and delivered in an automated fashion without the risk of impacting your entire IAM environment.

Embracing new technologies around Artificial Intelligence (AI) should also be part of your IAM digital transformation strategy. AI allows you to gain insights into user behavior that may not be otherwise possible. This improves your ability to provide a more secure environment and to better detect breaches. It also provides insights into user behavior that can drive marketing and sales campaigns.

Remember that your customers include your employees. When defining your IAM digital transformation strategy, consider technologies that improve the user experience, expand access to modern technologies, and allow users to leverage the devices of their choice. This requires evaluation and implementation of the same principles that were leveraged for external customers. Look at simplifying security interactions through authentication policies, easy-to-use multi-factor authentication (MFA), single sign-on, and access to collaboration technologies that can be leveraged in a secure manner. Look at zero-trust network principles by using technology to determine the level of confidence you have in systems connected to your network and in the behavior of your internal users.

 

Example Implementation

The principles of Identity and Access Management as part of digital transformation can be highlighted by the example of a large bank in New York. This bank was looking to provide a better customer experience and to improve the overall security of their systems. Their goals included delivering a new online customer banking experience, learning more about their customers, and leveraging targeted marketing to up-sell banking services in a personalized manner. Additionally, this included the delivery of new mobile based banking tools to better engage their customers.

This bank delivered a system that combined a platform for online and mobile banking with the Identity and Access Management tools needed to secure and personalize the user experience. By leveraging technologies that were tightly integrated, the bank was able to engage with users on the platform of their choice. This also allowed the bank to get a full view of a user's activities and deliver marketing during the sign-in flow. This marketing was specific to the user's profile, which was unified through a federated identity service. The process of “knowing your customer” (KYC) helped ensure that the user was correctly identified from initial registration through to performing secured interactions.

The bank also delivered a simplified MFA experience by leveraging policy-based authentication and step-up. Users were initially challenged for a second factor, which was incorporated into the core login flow. The step-up authentication appeared no different from a direct login and required no additional factors aside from the KYC processes. The risk associated with the customer's transactions was evaluated, and step-up authentication was only required when the user was authenticating from a new device or location, or when performing a higher-risk transaction. Additionally, user behavior was evaluated to ensure that a user was not logging in from two distant locations at the same time.

This implementation improved customer satisfaction and expanded business offerings. Customers were now able to interact with the bank through the platform of their choice, and security was delivered in a seamless, easy-to-use manner. The bank was able to better identify the complete profile of the user and provide a customized experience, including marketing of new services in a way that was unobtrusive and effective.

 

Common Mistakes

There are several mistakes that can be avoided to help ensure a successful IAM digital transformation strategy. The biggest technical mistake is leveraging non-integrated tools to deliver your IAM infrastructure. This overly complicates the deployment and introduces potential security gaps. Look to use tools that are either already tightly integrated or have predefined integrations. Validate those systems through an upfront proof-of-concept before making a significant purchase decision.

Additionally, waiting for a “big-bang” release greatly increases risk and reduces your ability to show incremental improvements. Management support for the IAM digital transformation strategy is critical, and being able to show quick benefits improves confidence in the solution. If possible, favor systems that can be replaced easily and delivered as smaller services in an agile fashion.

Not taking advantage of seasoned consultants who can help you define and deliver your IAM digital transformation strategy can also hurt your chances of success. Leverage the experience of integrators who have helped other organizations deliver on their strategies. The adage “penny wise, pound foolish” is applicable here. Delivering on your strategy and demonstrating success ensures long-term benefit from your IAM solution and continued executive support.

 

Conclusion

Your Identity and Access Management digital transformation strategy is a key part not only of your security posture, but also of your overall digital transformation strategy. IAM provides the foundational layer that supports all of your reimagined business and technological processes. Put the customer first, whether that customer is a buyer or an employee. The user experience is key, and that experience can be driven by a powerful identity integration layer and easily consumable microservices.

To deliver on this strategy, start with an internal assessment and review your legacy infrastructure. Identify the largest problems and address those first; incremental delivery is a clear path to success. Remember that flexibility is important when determining your IAM strategy: do not lock yourself into a specific flow if other approaches may provide more benefit. Finally, collaboration is a core part of your strategy. You need buy-in and support across the business to deliver on your new IAM digital transformation strategy.

PingFederate cluster across Multiple Kubernetes Clusters on GCP


Overview

117396376-d4fce300-aec7-11eb-9112-c8eb66b3015d.png

This document discusses how to set up an adaptive PingFed cluster through dynamic discovery with the DNS_PING protocol, which is the recommended approach for PingFederate 10.2.

 

Key Concepts

Dynamic discovery is well suited for environments where traffic volume may spike and require additional resources during peak periods. This elastic scaling capability lets you bring additional PingFederate engine nodes online with no further configuration changes after the initial setup.

Google's CloudDNS is Ping's recommended approach in GKE because it works seamlessly with GCP and JGroups' DNS_PING protocol.

ExternalDNS is a set of workloads deployed inside a Kubernetes cluster. It synchronizes exposed Kubernetes Services and Ingresses with DNS providers, making Kubernetes resources discoverable via public or private DNS servers and allowing you to control DNS records dynamically, in a provider-agnostic way, through Kubernetes resources.

CloudDNS is a GCP service providing low latency and high availability DNS zone serving. It can act as an authoritative DNS server for public zones that are visible to the internet, or for private zones that are visible only within your network.
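
To see what DNS_PING sees, you can resolve the cluster's discovery record from any VM or pod inside the VPC (assuming dig is available; the record name matches the zones created below):

# Each A record returned is a PingFederate node eligible to join the cluster
dig +short pingfederate-cluster.ping-us-east.google.internal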

 

Prerequisites

  • Register a new Google account and activate the $300 free-trial credit

  • Install the following tools on your laptop:

    • gcloud (GCP SDK)

    • kubectl (Kubernetes command-line tool)

    • Visual Studio Code (IDE)

    • git (to clone the GitHub repository)

 

Preparations:

1. Set up a VPC network with two subnets, one each for the us-east and us-west regions

  • Path: VPC network / VPC networks/ Create VPC network

117398485-7128e900-aecc-11eb-948e-022988ecf5c6.png
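
If you prefer the CLI, a rough gcloud equivalent of this step looks like the following (network name, subnet names, and IP ranges are placeholders):

gcloud compute networks create pf-vpc --subnet-mode=custom
gcloud compute networks subnets create subnet-us-east \
    --network=pf-vpc --region=us-east1 --range=10.116.0.0/16
gcloud compute networks subnets create subnet-us-west \
    --network=pf-vpc --region=us-west1 --range=10.240.0.0/16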
 

2. Create two Kubernetes clusters in us-east and us-west

  • Path: Kubernetes Engine / Clusters / Create

117398665-cebd3580-aecc-11eb-83ef-004f61952994.png
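
A gcloud sketch of the same step (cluster names mirror the node names shown later in the validation output; network flags assume the sketch from Step 1):

gcloud container clusters create cluster-us-east \
    --region=us-east1 --network=pf-vpc --subnetwork=subnet-us-east
gcloud container clusters create cluster-us-west \
    --region=us-west1 --network=pf-vpc --subnetwork=subnet-us-west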
 

3. Create a GCE persistent disk (gke-pf-disk) in us-east. It will later be mounted on the PingFed Console pod to persist configuration data

  • Path: Compute Engine / Storage - Disks / Create Disk

117398700-e0064200-aecc-11eb-9073-0782f57cf818.png
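
An approximate CLI equivalent (zone, size, and disk type are assumptions; pick a zone where your us-east nodes run):

gcloud compute disks create gke-pf-disk \
    --zone=us-east1-b --size=10GB --type=pd-standard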
 

4. Create two Cloud DNS private zones

  • Path: Network services / Cloud DNS / Create a DNS zone

  • Note: select the VPC network you created in Step 1 so that these private zones become visible to all entities (VMs, nodes, pods, etc.) within the network

117398889-4c814100-aecd-11eb-9cea-c4baab350099.png
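
A gcloud sketch for one of the two zones (repeat for us-west; the zone and DNS names match the records queried later in the Validation section):

gcloud dns managed-zones create ping-us-east \
    --dns-name=ping-us-east.google.internal. \
    --description="PingFed us-east private zone" \
    --visibility=private --networks=pf-vpc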
 

5. Allow traffic for pod-to-pod communication across the Kubernetes clusters

  • Path: VPC network / Firewall / Create Firewall Rule

  • Note: ingress and egress traffic on ports 7600 and 7700 must be allowed so that cluster members in the two clusters can reach each other; a gcloud sketch follows.
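
A minimal sketch of an equivalent ingress rule (rule name and source range are placeholders; GCP allows all egress by default):

gcloud compute firewall-rules create allow-pf-clustering \
    --network=pf-vpc --direction=INGRESS \
    --allow=tcp:7600,tcp:7700 --source-ranges=10.0.0.0/8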


6. [Optional] Set up VPC peering if your Kubernetes clusters are located in different networks

  • Path: VPC network / VPC network peering / Create Peering Connection
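
If peering is needed, a rough sketch follows (network names are placeholders; remember to create the mirror-image peering from the other network as well):

gcloud compute networks peerings create pf-east-to-west \
    --network=pf-vpc-east --peer-network=pf-vpc-west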

 

Deploy

1. Clone https://github.com/CoreBlox/ping-federate-gcp.git locally

 

2. Connect to the us-east Kubernetes cluster

  • [Tip] You can copy the gcloud connect command from the GCP console.

117400434-9d466900-aed0-11eb-8cf8-b4222eda07a2.png
  • Click the 'connect' option for the cluster you want to connect to, then run the command on your laptop or in Cloud Shell.
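
The copied command typically takes this form (project ID is a placeholder):

gcloud container clusters get-credentials cluster-us-east \
    --region=us-east1 --project=my-project-id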

 

3. Go to the us-east folder

cd ./ping-federate-gcp/clustered-pingfederate-us-east
 

4. Prepare the deployment.yml file with the kustomize utility

export PING_IDENTITY_K8S_NAMESPACE=default

# substitute the exported namespace into the generated manifests
kustomize build . | \
 envsubst '${PING_IDENTITY_K8S_NAMESPACE}' > deployment.yml
 

5. Deploy the k8s workload

kubectl apply -f deployment.yml
 

6. Go to the us-west folder
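
Assuming the repository mirrors the us-east layout:

cd ../clustered-pingfederate-us-west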

 

7. Repeat steps 4-5

 

Validation

1. Kubernetes Cluster - pods info (us-east)
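
The pod listings in this step and the next can be produced with something like the following (output trimmed here to the pod entries):

kubectl get all -o wide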

NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE                                                  NOMINATED NODE   READINESS GATES
pod/external-dns-7b5bb8879-pnhxv         1/1     Running   0          13m   10.116.2.17   gke-cluster-us-east-default-pool-clus-18a3dac7-bc7v   <none>           <none>
pod/pingfederate-8484cd5f6-8c8j6         1/1     Running   0          13m   10.116.2.18   gke-cluster-us-east-default-pool-clus-18a3dac7-bc7v   <none>           <none>
pod/pingfederate-admin-9f5d68f45-mczfg   1/1     Running   0          13m   10.116.0.11   gke-cluster-us-east-default-pool-clus-18a3dac7-jr4h   <none>           <none>
 

2. Kubernetes Cluster - pods info (us-west)

NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE                                                  NOMINATED NODE   READINESS GATES
pod/external-dns-5b9567c765-n79ll   1/1     Running   0          5m7s   10.240.0.11   gke-cluster-us-west-default-pool-clus-0a7565e7-0903   <none>           <none>
pod/pingfederate-6df6cd7f79-jf4ps   1/1     Running   0          5m7s   10.240.1.9    gke-cluster-us-west-default-pool-clus-0a7565e7-ssn2   <none>           <none>
pod/pingfederate-6df6cd7f79-wcdsx   1/1     Running   0          5m7s   10.240.0.12   gke-cluster-us-west-default-pool-clus-0a7565e7-0903   <none>           <none>
 

3. Cloud DNS records (us-east)

gcloud dns record-sets list \
    --zone "ping-us-east" \
    --name "pingfederate-cluster.ping-us-east.google.internal" \
    --type A

NAME                                                TYPE  TTL  DATA
pingfederate-cluster.ping-us-east.google.internal.  A     300  10.116.0.11,10.116.2.18
 

4. Cloud DNS records (us-west)

gcloud dns record-sets list \
    --zone "ping-us-west" \
    --name "pingfederate-cluster.ping-us-west.google.internal" \
    --type A

NAME                                                TYPE  TTL  DATA
pingfederate-cluster.ping-us-west.google.internal.  A     300  10.240.0.12,10.240.1.9
 

5. PingFed Console service

  • Port-forward the admin service and access the admin console from your laptop's loopback address.
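
For example (the admin service name is an assumption based on the pod names above):

kubectl port-forward svc/pingfederate-admin 9999:9999

Then, from a second terminal, query the cluster status through the admin API: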

curl -u Administrator:2FederateM0re \
-k 'https://127.0.0.1:9999/pf-admin-api/v1/cluster/status' \
--header 'x-xsrf-header: PingFederate' | json_pp

{
  "nodes" : [
     {
        "nodeGroup" : "US-WEST-GROUP",
        "nodeTags" : "",
        "version" : "10.2.2.0",
        "index" : 623902800,
        "mode" : "CLUSTERED_ENGINE",
        "address" : "10.240.0.12:7600"
     },
     {
        "address" : "10.116.0.11:7600",
        "mode" : "CLUSTERED_CONSOLE",
        "index" : 938754485,
        "version" : "10.2.2.0",
        "nodeGroup" : "US-EAST-GROUP"
     },
     {
        "nodeGroup" : "US-EAST-GROUP",
        "nodeTags" : "",
        "version" : "10.2.2.0",
        "index" : 823652998,
        "address" : "10.116.2.18:7600",
        "mode" : "CLUSTERED_ENGINE"
     },
     {
        "nodeTags" : "",
        "version" : "10.2.2.0",
        "nodeGroup" : "US-WEST-GROUP",
        "mode" : "CLUSTERED_ENGINE",
        "address" : "10.240.1.9:7600",
        "index" : 1689306981
     }
  ],
  "replicationRequired" : true,
  "lastConfigUpdateTime" : "2021-05-06T17:14:31.000Z",
  "mixedMode" : false
}
 
117401960-80f7fb80-aed3-11eb-9c44-cbf963731000.png
 
117402018-9e2cca00-aed3-11eb-89fd-7db69c8c1408.png
 

Using Radiant Logic RadiantOne FID to Enable Zero Trust


What is Zero Trust?

Zero Trust is a security principle based upon identity and data as opposed to conventional network and host-based access controls. Historically, models of securing access worked for applications that resided on-premises with either direct or VPN-based access. This model no longer applies. Resources are no longer just on-premises but are a complex hybrid of on-premises and cloud-based applications. Zero Trust is based upon the concept that you must have a way of enforcing security without relying on a perimeter. Instead, you must rely on what you know combined with other factors like risk. This is not a new concept. In fact, in 2005 Dan Hitchcock from Microsoft predicted that information security would move from network and host-based security to security based upon data.

Evolution of Information Security Technology.png

Access to resources is now determined by what you know about the request—who the user is, what devices they are using, what other risk factors can be determined—even if the user is already verified. Identity and risk are now what is most important. This risk should be assessed on every request to access resources and not just at initial access time. Once the user is securely identified, authorization policies must be defined based upon the principle of least access. This is not just granting access to applications, but also dynamically authorizing access to what you can do within the application itself.

Context is Core to Zero Trust

With identity being core to Zero Trust, what you know about the user is key to determining access. The context of a request requires understanding attributes of the identity in relation to what the user is trying to do. Authorization of access and assessment of risk are based upon what you know about a user. This assessment could be attribute-based, group-based, or even based upon relationships between the user and other identities in the environment. Contextual information can be used to classify access requests for use by applications and security systems. Attempts to access information can now be secured based upon the relationship between the user and other factors – like a user's role in the organization – sourced from the user's global profile.

Radiant Logic RadiantOne FID is an identity integration layer that allows you to deploy scalable solutions that solve the complex challenges associated with user data. FID integrates identity data to build a unified view across heterogeneous data sources. These data sources can be LDAP directories, databases, web services, and even applications. These unified profiles can then be delivered to applications to make authorization decisions around user access, and to security systems for contextual decisions around user intent.

caching2.png

This integration layer is the source of truth for identities and their related profile attributes. Instead of building connections to identity data on an application-by-application basis, this centralized source of truth can be leveraged, externalizing and eliminating the complexity of identity profile consolidation. As new sources of identity information are incorporated into a user's global profile, those sources can be added without changes to the applications and other consumers of the identity data.
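
For example, an application could retrieve a unified profile with a single LDAP search against FID, regardless of how many backends contribute the attributes (hostname, port, bind credentials, and naming context below are illustrative):

# One query returns a profile joined from directory, database, and application sources
ldapsearch -H ldap://fid.example.com:2389 \
    -D "cn=service-account,ou=apps,o=companyprofiles" -w secret \
    -b "ou=people,o=companyprofiles" "(uid=jdoe)" mail title manager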

Applications and security systems rely not only on user attributes for authorization and risk determination, but also on roles. Roles are often represented by groups in an environment, but groups may not exist in the systems that need access to a role definition. FID allows you to dynamically build groups from the underlying sources without requiring the creation of static groups, new repositories, or manually synchronized group data.

vds and groups.png

In this example, there are three sources of identity data. The HR, Sales, and Marketing groups are built dynamically from the data in the underlying repositories instead of being created manually and synchronized from those sources.
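
A consumer can then read a dynamically built group exactly as if it were a static one (the DN and attribute names are illustrative):

# The member list is assembled on the fly from the underlying repositories
ldapsearch -H ldap://fid.example.com:2389 \
    -D "cn=service-account,ou=apps,o=companyprofiles" -w secret \
    -b "ou=groups,o=companyprofiles" "(cn=Sales)" member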

One of the other principles of Zero Trust is least access: instead of being generically granted access to all resources, you should only be granted access to the minimum set of resources necessary to do your job. One of the challenges with access control is understanding the relationship between users and those who can approve their access to systems. FID allows you to dynamically restructure a hierarchy based upon user attributes without having to create a new static representation of user data.

change the hierarchy.png

In this example, a model has been created based upon the schema extracted from the LDAP-based enterprise directory. One of the attributes of each user is his or her manager, and identifying a user's manager is needed for access approvals. FID restructures the hierarchy of the enterprise directory into one based upon manager for consumption by access-approval systems.

Security is Core to Zero Trust

Security is at the heart of Zero Trust architectures. By centralizing identity data into a solution like FID, you gain several key benefits. In addition to a unified profile, a common abstraction layer provides one point of access to all identity data. Instead of applications accessing multiple sources and tracking activity across all of them, access flows through a common location with centralized logging, so a single log can be monitored by Security Orchestration, Automation, and Response (SOAR) systems.

Authentication for applications can also be improved by leveraging FID. By abstracting the backends, authentication can be centralized in FID instead of each application having to authenticate users against multiple stores, and authentication requests are logged centrally instead of per backend. Additionally, FID can serve as a backbone for MFA architectures: authentication (bind) requests to FID can be protected by MFA so that a user is prompted by an authenticator application even when the application itself does not support MFA.
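
As a sketch, even a plain LDAP bind flows through the central layer, where it can be logged and, if so configured, gated by an MFA challenge (connection details are illustrative):

# With MFA enabled on the bind, the request can be held until the user approves a push
ldapwhoami -H ldaps://fid.example.com:2636 \
    -D "uid=jdoe,ou=people,o=companyprofiles" -w userpassword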

Session management is also a key factor in Zero Trust architectures. Understanding application access based upon a user's profile can be used to kill sessions for those applications if needed. Additionally, access can be granted and removed dynamically based upon a user's profile at access time.

Real-time Access

Zero Trust relies on access to data in real time. Identity data is not static and may be based upon computed logic or joined attributes, so you cannot rely on data imports and additional repositories of static information to store this profile data. However, profile data may come from sources that are not easily accessible. Data can be cached for performance, but naive caching suffers from the same staleness challenge as data imports.

Caching.png

FID allows you not only to cache data for performance with minimal response times, but also to update that data in real time. This allows applications and security systems to make decisions at the time of user access.

Conclusion

Zero Trust is at the core of the architectures of the future. Radiant Logic RadiantOne FID improves your security posture and simplifies Zero Trust implementations. Identity and context are necessary for authorization and risk assessment; FID centralizes access and provides a unified profile of user data as your single source of truth. Additionally, centralized access delivers common logging and a point of aggregation for authentication. Let us know when we can help you with your Zero Trust journey.
