
Are over-privileged users putting your data at risk?


In today’s hyper-connected IoT world, your company’s data and intellectual property are vulnerable from more access points, and to more users, than ever before. Employees, remote collaborators and stakeholders all have some form of access to your company’s information. Yet the question IT security professionals and business owners need to be asking is this: Do authorized users have too much privileged access for the job they’re doing?

Recent high-profile data breaches at large national retailers and international banking organizations—not to mention our own federal government agencies—have resulted in some pretty massive losses of intellectual property and sensitive personal data.

Yet the bigger threat to your company might be a little closer to home. That is, your authorized users who have legitimate access to your systems might be putting your data at risk. It’s an all-too-common problem that’s on the rise as fast-paced business changes and decentralized staffing trends make it hard to manage credentials of every user at every location every minute of the day.

If you think that your authorized users are a low priority for data security, consider this: Employees who have been fired, laid off or demoted—or even temporary workers and third-party vendors—might not have your company’s best interests at heart as they exit your firm. And these potentially disgruntled partners have user access to your systems and data that makes them a substantial security risk.

A single enterprise can have thousands of privileged accounts, and the risks associated with each can be enormous should a breach occur. Traditional data security solutions simply can’t fully protect your business, and it’s easy to see why. Many old-school tools assign too much trust to all administrators, don’t fully protect all access points, and can’t pinpoint problems with over-privileged users whose identity credentials remain valid even though their roles and responsibilities within the company have changed.

Thwart threats from the inside out

Make mitigating insider threats a top priority by adopting a privileged access management solution to eliminate the risk of entitlement creep. Protect sensitive security credentials, such as administrator passwords, with enforcement across cloud, virtual and physical environments. Not only is this a requirement for compliance, it’s also a best practice for any business that wants to protect its data, applications and networks from malicious intent.

Adopt a zero-trust mentality

Some sectors, like the payment card industry, now have requirements for secure data access that restrict password privileges to the lowest level required to perform the job at hand. Companies of any size in any industry would be smart to adopt this same less-is-more approach to avoid granting excessive access to information. Products like CA Privileged Access Manager are built upon a “zero-trust” approach that assumes administrators can’t be trusted and limits privileged access to the most basic access needed.

The bottom line is this: The only way to fully protect your business from a data breach is to add privileged access management to your current data security initiatives. As a CA Technologies Advanced Partner, CoreBlox offers comprehensive end-to-end identity and access management leveraging CA Privileged Access Manager to fully protect clients from vulnerabilities like entitlement creep. Contact us to see how we can control privileged access for your users.


Is password chaos raining down from your hybrid cloud?


Hybrid cloud has its advantages, including cost-effective, flexible and scalable access to business-critical applications and data. But the very nature of these unified solutions, in which on-premises and virtualized assets are integrated for resource optimization, also presents some real-world challenges for IT security professionals, not the least of which is managing the password privileges associated with these distributed computing environments.

Among the challenges: as scale increases, so does the number of entities that require administrative access—and privileged passwords. This represents a major password management challenge that, if left unchecked, could leave password privileges with users whose security credentials are no longer valid.

Another vulnerability is scope. Cloud management consoles pack a powerful punch when it comes to access. Consider the amount of access an authorized administrator has through a cloud console that is used to manage the development, deployment and scale of cloud-based websites, applications and services that most likely include mission-critical business applications.

And then there’s the vulnerability being driven by the hyper-connectivity of everything, including clouds as well as the Internet of Things, which relies on machine-to-machine authentication. In this scenario, passwords are used by one system to gain access into another or, in some cases, credentials are hard-coded into applications, like SSH key pairs and PEM-encoded keys.  Considering that an organization might have thousands of keys, it’s more critical than ever to authenticate credentials of these privileged accounts to mitigate risk.

So what can an IT professional do to control the password chaos lurking inside hybrid cloud?

Mitigate risk at the cloud console level. With a comprehensive privileged access management tool, you can restrict privileges of users to only the authorized hybrid cloud infrastructure and you can record and monitor all activity, so malicious activities can be targeted and thwarted.

Automate discovery at the device level. By automating discovery of devices, systems, applications, services and accounts, including the APIs required for virtualization and cloud management, you can alert administrators when new virtual machines are created. You can also monitor activity by pulling bulk-import system lists from text files to identify potential malicious activity before it becomes a full-fledged breach.

Enforce password security at the user level. By rotating passwords based on time or when triggered by an event, you can more effectively govern use by putting limits on access times and requiring multiple authorizations for access. And you can ensure that any and all password credentials are synchronized, so that if they are changed at one end of the system they are changed everywhere.

With the right tools, you can manage and protect passwords for privileged accounts in hybrid cloud environments. Products like CA Privileged Access Manager can be used to automate and simplify the task of monitoring and recording privileged user activity across virtual, cloud and physical environments. CoreBlox is a CA Technologies Advanced Partner that can deploy a privileged password management solution to control access to your company’s data and systems. Contact us to learn how it works and explore the CA technologies that we leverage for success.

CoreBlox Announces Selection as CA Technologies Premier Partner


New York, June 29, 2016 - CoreBlox, with its parent company WinMill Software, announced this week that it has attained “Premier Partner” status with CA Technologies.  This prestigious title is awarded to only a handful of companies worldwide, and reflects the combined companies’ status as a top reseller of CA Technologies products.  CoreBlox is also highly regarded as one of CA’s premier services partners.  CoreBlox and WinMill partner with CA in three product areas:  CA Project and Portfolio Management, Access and Security (Single Sign-On, Identity Management and Privileged Access Management), and API Management.

“CoreBlox and CA have been strong partners for over a decade,” said Todd Clayton, President of CoreBlox.  “Since we joined WinMill in 2012, we have worked very hard to grow our sales capabilities in order to complement our professional services offerings. Combining with WinMill to achieve Premier Partner status reflects our commitment to solving complex security and APIM problems with the CA product suite. We’re looking forward to continued success for many years to come.”

About CoreBlox

CoreBlox, LLC, a wholly owned subsidiary of WinMill Software, is a CA Technologies Premier Channel Partner and Preferred Services Partner, and a leading provider of enterprise security solutions. Headquartered in Framingham, MA, the CoreBlox team specializes in single sign-on (SSO) and web access management solutions, SAML and identity federation services, and LDAP Directory Virtualization. CoreBlox has broad experience managing, executing and supporting Identity Management deployments. CoreBlox’ commitment to service excellence has won successful clients across the Fortune 500 and in a number of key verticals including banking, information technology, insurance, telecommunications, and the public sector. For more information, please visit www.coreblox.com

About WinMill Software

Founded in 1994 and headquartered in New York City, WinMill Software, Inc. is a CA Technologies Premier Channel Partner and Preferred Services Partner, and North America’s leading provider of CA PPM solutions.  WinMill has completed more than 650 PPM implementations since first partnering with CA in 2005, and boasts a team of PPM experts that are highly regarded as the most knowledgeable professionals in the industry.  WinMill has implemented PPM across virtually every vertical sector including banking and financial services, retail and hospitality, pharma, utilities, high technology, insurance and the public sector.  For more information on WinMill Software, visit www.winmill.com.

CoreBlox and CA Technologies


As a CA Technologies Premier Partner, CoreBlox is well equipped to help address the security issues that are facing companies as the use of mobile and cloud applications continues to expand. We've put together this video to highlight our capabilities and explain why companies turn to CoreBlox for everything from strategy planning to deployment. Contact us today to discuss your needs and how we can help!

Step-By-Step: CA SSO WebAgent Request Flow

Photo credit: Dean Hochman, via Flickr

When a user opens a new browser session and navigates to a CA SSO protected resource, the Agent intercepts the request on the web server where the protected resource resides. Within the Agent there are different Managers that each perform different actions. The following describes the flow:

1. To start, the HighLevelAgent (HLA) passes the request to the Resource Manager. It is the responsibility of the Resource Manager to extract information about the requested resource, such as HTTP_Host, client IP address, Agent name, URL, method (GET, POST, etc.) and cookie domain. It also performs checks such as CSSChecking, BadURLChars, AutoAuthorize URL, etc.

2. Next comes the Session Manager. Since no session has been established yet, the Session Manager returns no action.

3. At this point the Protection Manager is called. The Agent, via the Low Level Agent (LLA), makes a TCP/IP connection to the Policy Server. The Policy Server, based on the Agent name and resource URL (already captured by the Resource Manager above), is able to identify whether the resource is protected by CA SSO. This information is captured by the Realm\Rule objects at the Policy Server. Once the Policy Server identifies this information, it sends the response back to the Agent via TCP/IP. If the resource is protected, the Agent passes on to the next manager.

4. The Agent calls the Credential Manager. Since there are no credentials present, the Credential Manager calls the Challenge Manager.

5. The Challenge Manager, based on the authentication scheme selected (this information is captured at the Protection Manager step from the Realm), processes the credential collection mechanism via HTML Form credentials, NTC, etc.

6. At this time the user is presented with a credential collection page (based on the authentication scheme defined).

7. On supplying the credentials, the Webagent calls the Authentication Manager. The Agent again makes a TCP/IP connection to the Policy Server via the LLA, as it cannot make this decision on its own. The Policy Server at this point tries to disambiguate/authenticate the user based on the User Directories selected for that Domain object. Once it authenticates the user, it generates a SessionSpec, encrypts it with the SessionTicket key, and sends it along with the SessionID and other authentication responses to the Webagent.

8. The Webagent at this point declares that the user is authenticated and passes the request on to the Session Manager.

9. The Session Manager now generates the user session, creates an SMSESSION cookie and writes it to the user’s browser.

10. Finally, the Agent calls the Authorization Manager. Since the Agent again does not have the capability to make this decision, it calls the Policy Server via TCP/IP. The Policy Server, based on the ‘Policy’ object definition for that domain, makes the authorization decision. It then sends the authorization responses to the Webagent.

11. At this point, the Webagent has all the information it needs to make the decision. Now the Webagent passes control to the web server to show the target page to the end user.

 

If the user tries to browse another protected resource in the same browser session, the flow is the same, but a few things differ: at step 2, the Session Manager decodes the SMSESSION cookie already present in the browser and checks whether it is still valid (session timeouts, etc.). Then, at step 4, the Credential Manager uses the session information and bypasses the Challenge Manager step to perform the authentication. This way single sign-on is achieved without challenging the user, while authentication/authorization occurs in the background.
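
To make the sequence easier to follow, here is a deliberately simplified model of the manager chain in Python. This is our own illustration of the logic described above, not CA's actual Agent internals: every name and data structure is invented, and the real Agent delegates these decisions to the Policy Server over TCP/IP rather than calling local functions.

# Simplified, illustrative model of the Web Agent manager chain.
# All names and data here are invented for illustration.

PROTECTED_RESOURCES = {"/protected/app"}   # stand-in for the Realm/Rule lookup (step 3)
USER_DIRECTORY = {"jsmith": "secret"}      # stand-in for the User Directory (step 7)

def handle_request(url, creds=None, smsession=None):
    # Step 1: Resource Manager extracts resource details (here, just the URL).
    # Step 2: Session Manager checks for an existing, valid session.
    session = smsession if smsession in USER_DIRECTORY else None

    # Step 3: Protection Manager asks the Policy Server whether the resource is protected.
    if url not in PROTECTED_RESOURCES:
        return "served (unprotected)"

    # Steps 4-7: Credential, Challenge and Authentication Managers.
    if session is None:
        if creds is None:
            return "challenge: credential collection page"   # step 6
        user, password = creds
        if USER_DIRECTORY.get(user) != password:
            return "challenge: authentication failed"
        session = user   # steps 8-9: the SMSESSION cookie would be written here

    # Step 10: Authorization Manager asks the Policy Server for the decision.
    # Step 11: control passes to the web server to serve the target page.
    return f"served as {session}"

print(handle_request("/protected/app"))                               # no session, no creds: challenge
print(handle_request("/protected/app", creds=("jsmith", "secret")))   # first login
print(handle_request("/protected/app", smsession="jsmith"))           # repeat visit: SSO, no challenge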

Hopefully you will find the information above useful as you troubleshoot Webagent-related issues.

CA Single Sign-On Error# '32': Analysis & Resolution


By: Badal Bhushan, CoreBlox Consultant

Summary of the Problem: A customer reported that their SMPS logs get filled up with error messages like this one:

[28393/3935185808][Thu Sep 22 2016 11:06:04][SmDsLdapConnMgr.cpp:1180][ERROR][sm-Ldap-02230] Error# '32' during search: 'error: No such object matched dn: ou=people,ou=customer,dc=xxxxxxxx,dc=com' Search Query = 'objectclass=*'

[28393/3945675664][Thu Sep 22 2016 11:06:04][SmDsLdapConnMgr.cpp:1180][ERROR][sm-Ldap-02230] Error# '32' during search: 'error: No such object matched dn: ou=people,ou=customer,dc=xxxxxxxx,dc=com' Search Query = 'objectclass=*'

The actual end users do not seem to be impacted by the error, but it creates a nuisance for administrators who review the SMPS logs.

Analysis: We enabled the Policy Server profiler logs. While reviewing the profiler logs for one of the requests with Error 32, we made the following finding:

 

[09/22/2016][11:06:07.547][11:06:07][28393][3998124944][SmDsDir.cpp:66][CSmDsDir::CSmDsDir][][][][][][][][][][][][][][][][][][][About to initialize directory, Oid='0e-00059d48-066c-1174-a102-8301b3dd0000', Name='customer group bind'][][Start of call InitDir.]

[09/22/2016][11:06:07.548][11:06:07][28393][3998124944][SmDsUser.cpp:144][CSmDsUser::CSmDsUser][][][][][][][][][][][][][][][][][][][About to initialize User 'uid=zxcvbn,ou=people,ou=customer,dc=xxxxxxxx,dc=com' in dir 'psysadm customer group RAM bind'][][Start of call InitUser.]

[09/22/2016][11:06:07.550][11:06:07][28393][3998124944][SmDsLdapConnMgr.cpp:1180][][][][][][][][][][][][][][][][][][][][][][LogMessage:ERROR:[sm-Ldap-02230] Error# '32' during search: 'error: No such object matched dn: ou=people,ou=customer,dc=xxxxxxxx,dc=com' Search Query = 'objectclass=*']

[09/22/2016][11:06:07.550][11:06:07][28393][3998124944][SmDsLdapConnMgr.cpp:1191][CSmDsLdapConn::SearchExts][][][][][][][][][][][][][][][][][][][][][LDAP search of objectclass=* took 0 seconds and 2278 microseconds]

[09/22/2016][11:06:07.550][11:06:07][28393][3998124944][SmDsLdapProvider.cpp:2275][CSmDsLdapProvider::Search][][][][][][][][][][][][][No such object][][][][][][(Search) Base: 'uid=zxcvbn,ou=people,ou=customer,dc=xxxxxxxx,dc=com', Filter: 'objectclass=*'][][Ldap Search callout fails.]

Now, the user zxcvbn does not exist in the User Directory 'customer group bind'. In fact, the user does not exist in any of the user directories the customer has set up. Looking at the User Directory connection:

One can see the LDAP User DN Lookup defined as: UID=*,ou=people,ou=customer,dc=xxxxxxxx,dc=com

If you DO NOT use parentheses in the lookup start and end, then it is assumed that ALL of the users who will be authenticating have IDENTICAL DNs, except for whatever is entered at the prompt. As a result, no search is executed. Whatever the user types at the prompt is sandwiched in between the lookup start and end, and this is treated as a valid DN to bind to the directory.

When a user lookup happens for this UD, it takes the user ID typed in (zxcvbn) and searches for the DN:

'uid=zxcvbn,ou=people,ou=customer,dc=xxxxxxxx,dc=com'

If the lookup for the above DN does not fetch any match, it will throw this error:

 (Search) Base: 'uid=zxcvbn,ou=people,ou=customer,dc=xxxxxxxx,dc=com', Filter: 'objectclass=*'][][Ldap Search callout fails.]

In the corresponding SMPS logs it will log an Error 32. Based on the above flow, this is a valid error we see in the logs as the user DN does not match the defined User DN lookup.

We found that all the reported errors (with UTCB Policy Servers) were for non-existent users. In previous versions of CA SSO, this error may not have been reported; from R12 onwards, however, it is reported along with the invalid credentials error (error 49).

Solution: If you are seeing these errors consistently every minute, there might be a monitor/probe/script using old user credentials which no longer exist in the user stores. Search for these connections and contact the owning team to update the credentials they are using. Alternatively, if the probe is no longer required, disable it. This should reduce the frequency of errors reported.

Workaround: As a workaround option, update the User Directory to have a different User DN Lookup such as:

(&(uid=*)(objectclass=*))

With the root still defined to 'ou=people,ou=customer,dc=xxxxxxxx,dc=com', error reports such as those above should be avoided.
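
To see why the two lookup styles behave differently, here is a small Python illustration using the sanitized values from the logs above (the variable names are ours, not product internals):

# Direct DN mode: no parentheses in the lookup start/end.
lookup_start = "uid="
lookup_end = ",ou=people,ou=customer,dc=xxxxxxxx,dc=com"
typed_id = "zxcvbn"

dn = lookup_start + typed_id + lookup_end
print(dn)   # uid=zxcvbn,ou=people,ou=customer,dc=xxxxxxxx,dc=com
# The Policy Server uses this exact DN as the search base with filter
# (objectclass=*); if no such entry exists, LDAP returns error 32 ("No such object").

# Search filter mode: parentheses in the lookup start/end.
search_start = "(&(uid="
search_end = ")(objectclass=*))"
ldap_filter = search_start + typed_id + search_end
print(ldap_filter)   # (&(uid=zxcvbn)(objectclass=*))
# With the root set to ou=people,ou=customer,dc=xxxxxxxx,dc=com, this becomes a
# subtree search: a non-existent user simply returns zero results instead of error 32.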

 

 

Integrating CA API Gateway with Office 365


In this blog post, I will document the changes required to integrate CA API Gateway with an Office 365 tenant for federated SSO. In this scenario, CA API Gateway acts as the Identity Provider (IdP) and the Office 365 tenant acts as the Service Provider (SP).


Here’s the high-level flow for a typical SAML 2.0 federated SSO:

Process Steps:


1. The user goes to the Office365 tenant URL in the browser.
2. Since the user is not authenticated yet, there is no Office365 user session. Office365 generates the SAML SSO request and redirects the user to the CA API Gateway URL with the SAML request.
3. CA API Gateway validates the SAML request and authenticates the user using Windows Integrated Authentication against the Windows domain (Microsoft Active Directory).
4. User authentication against Microsoft Active Directory (AD) is successful.
5. If the user authentication is successful, the Gateway generates a SAML 2.0 token with the user's attributes from Microsoft AD. The Gateway then adds this SAML token to the message body with an auto-submit instruction to the Office365 tenant URL and submits the request to the browser. The Office365 tenant now has the SAML token for the authenticated user.
6. The Office365 tenant validates the SAML token, extracts the user attributes from it, and maps those attributes to the Office365 AD to create a user session. The generated user session is then sent to the browser.
7. The browser then automatically sends this user session with all subsequent resource requests to the Office365 tenant.

Below are the commands to be entered in the Windows Azure Active Directory Module for Windows PowerShell. I have added a few reference links for the tenant commands.

 

 

$dom ='dummydomain.com'

$Brand = "Dummy Site"

$activeSO = "https://idpsite.com:443/sso/saml"

$PassiveSO = "https://idpsite.com:443/sso/saml"

$Issuer = "http://idpsite.com"

$cert = "MIIC9TCCAd2gAwIBAgIJAOjG+g1BNMLaMA0GCSqGSIb3DQEBDAUAMBgxFjAUBgNVBAMTDXNpZ24udGVzdC5jb20wHhcNMTYxMDIxMTkzOTIxWhcNMjExMDIwMTkzOTIxWjAYMRYwFAYDVQQDEw1zaWduLnRlc3QuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmuJ6oUQItWDkF8Kf9wq6Q6zALvtvVdJKPz97eBSSp5IN+Hn6nPDbYGeb3DQEHxDLO23uUIEK6O4S/3mLBFJ8i+bHNk4J5ilKP0MJGrf1wIm1hD61YKoTFSBal9v5INkm6usXXWBibj1NKJBoazJKpNTx0OY1kgJ2qKPK9EMosCUiB/VxOONpajMmHgNyAEdiCWp8G6Y62bTxse73VsXDm/SASL4PeweIxApu/Sbr/eMw3v8oeyZFt3Fr1ovlz6nfLJ+R4m7OA5cTcaX7oeX+DuKLA7tJizxVPPLF9ROr+sN5ZmdohZMn6EbZssLv/+N6sP30MhnJVFn6P0yO66CTiwIDAQABo0IwQDAdBgNVHQ4EFgQUWRba0ivmwwWnFloObBoKSuNJ6UIwHwYDVR0jBBgwFoAUWRba0ivmwwWnFloObBoKSuNJ6UIwDQYJKoZIhvcNAQEMBQADggEBAIWr2uPNRd2vG7hts+BdAygUC+OsrwzIf/zLCnuDlu38GKJOf+7OhqYXoILcC8aNH+gWp+cv0pw5WhOiVPbMx/TNZ9yx0WUnbCouZdJOuyoS5thirEms0GsMRoBapkNYpBvPIZzJhvdcwUc2PJfCQ50tQL1L5+AnM72JEkyg8KkH5scrPtAG898GiDWbMFTnw4oktoB8+dl9VoZ6Clzdbz7oygPbOKyy/G05zvRQlcZHdF/ygkey5fSDNUeP1Pvbav71ja6cB2WwjckS3ayyzEyYV0+6XloAthRA/taZhDhR2OLUw7tbVyiSCbIapQqAc7iYYl7hWwjSu7lV5ZlYih0="

 

Set-MsolDomainAuthentication -DomainName $dom -FederationBrandName $Brand -Authentication Federated -PassiveLogOnUri $PassiveSO -IssuerUri $Issuer -ActiveLogOnUri $activeSO -LogOffUri $PassiveSO -SigningCertificate $cert -PreferredAuthenticationProtocol Samlp

 

Make sure to substitute the following details in these commands:


dom = Name of the Office365 domain
PassiveSO and activeSO = CA API Gateway IDP URL
Issuer = Issuer value set by CA API Gateway in the SAML Response
cert = Public certificate of the certificate key pair used by CA API Gateway to sign the SAML response

Reference Links – 
https://msdn.microsoft.com/en-us/library/azure/dn641269.aspx 

Here’s the link to the Office365 SP Metadata URL – 
https://nexus.microsoftonline-p.com/federationmetadata/saml20/federationmetadata.xml
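
Rather than copying the ACS URL and entityID values from this post, you can read them straight out of the SP metadata. A minimal Python sketch using only the standard library (the expected values in the comments come from the configuration described below):

import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://nexus.microsoftonline-p.com/"
       "federationmetadata/saml20/federationmetadata.xml")
MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"

with urllib.request.urlopen(URL) as resp:
    root = ET.parse(resp).getroot()

print("entityID:", root.get("entityID"))   # expect urn:federation:MicrosoftOnline
for acs in root.iter(MD + "AssertionConsumerService"):
    # expect https://login.microsoftonline.com/login.srf among the locations
    print(acs.get("Binding"), "->", acs.get("Location"))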

 

CA API Gateway Service:

Here are a few of the key changes that we need to make in the CA API Gateway SAML IDP service for it to work successfully with Office365:

1) ACS URL – Assertion 20, 58, 75, 79 – We need to post the SAML Response to the Microsoft URL https://login.microsoftonline.com/login.srf. This value is derived from the Office365 SP Metadata attribute AssertionConsumerService.

2) Audience – Assertion 21, 58 – The Audience in the SAML Response should be set to urn:federation:MicrosoftOnline. This value is derived from the Office365 SP Metadata attribute entityID.

3) SamlRequest Issuer – Assertion 22, 47, 86 – We need to validate that the SAML Request is sent by Office365 with the Issuer value urn:federation:MicrosoftOnline.

4) Immutable ID – Assertion 39 – We need to query AD to get the Immutable ID of the authenticated user, which needs to be sent in the SAML Response in the NameID field. The objectGUID attribute in AD is the Immutable ID. This attribute has a binary value; to query it from AD, we need to append the postfix “;binary” to the LDAP attribute in the LDAP Query Assertion. See the reference link from CA for extracting binary attributes.

Link - http://www.ca.com/us/services-support/ca-support/ca-support-online/knowledge-base-articles.TEC1718944.html?intcmp=searchresultclick&resultnum=2

 

5) SAML Request ID – Assertion 49, 51, 65, 76 – We need to extract the ID value from the SAML Request sent by Office365 to CA API Gateway. We need to send this ID value back to Office365 in the SAML Response, in the InResponseTo attribute of the samlp2:Response and saml2:SubjectConfirmationData elements.

6) SAML Assertion Attribute – Assertion 58 – The SAML Assertion should have an attribute named IDPEmail with its value set to the user’s email ID, and the Name Identifier set to the objectGUID extracted from AD in point 4 above.

7) SessionIndex – Assertion 61 – We need to generate a UUID and send it in the SAML Response in the SessionIndex attribute of the saml2:AuthnStatement element.

8) NameID Format – Assertion 63 – We need to update the format of the NameID attribute to urn:oasis:names:tc:SAML:2.0:nameid-format:persistent.

9) Remove SubjectLocality – Assertion 67 – We need to remove the SubjectLocality element from the SAML Response.

10) SAML Assertion Sign – Assertion 71, 73, 74 – Since we need to update the SAML Assertion with the changes mentioned in points 7 through 9, we cannot select “Sign Assertion” in the Create SAML Token Assertion. Instead, we sign the SAML Assertion using the “(Non-SOAP) Sign XML element” assertion. This sign assertion has an option to add the signature as the first or last child of the signed element. However, for the SAML Assertion to be accepted by Office 365, it must have the saml2:Issuer element followed by the ds:Signature element. Assertions 73 and 74 are used to do this sequencing, which resolved the following SSO error on the Office365 tenant:

Error from attempted sign in:

Additional technical information:

Correlation ID: 12529wg9w-27d6-2496-w12k-qwxu14xe05jd

Timestamp: 2016-10-11 14:43:19Z

AADSTS50000: There was an error issuing a token.

Sample SAML Request generated by Office365:

<samlp:AuthnRequest ID="_9eb6887f-8b57-4c58-9ef1-e9c88248cf0b" Version="2.0" IssueInstant="2016-10-20T14:17:44.590Z" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
    <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">urn:federation:MicrosoftOnline</Issuer>
    <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"/>
</samlp:AuthnRequest>

 

Sample SAML Response:


<samlp2:Response InResponseTo="_9eb6887f-8b57-4c58-9ef1-e9c88248cf0b" Destination="https://login.microsoftonline.com/login.srf" ID="ResponseId_a31a330c4ba5c9cb3cc008174fa4c3a0" IssueInstant="2016-10-20T14:18:10.131Z" Version="2.0" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp2="urn:oasis:names:tc:SAML:2.0:protocol">
    <saml2:Issuer>http://idpsite.com</saml2:Issuer>
    <samlp2:Status>
        <samlp2:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
    </samlp2:Status>
    <saml2:Assertion ID="SamlAssertion-7a5b9c01049a0575cf20ed7e6371c728" IssueInstant="2016-10-20T14:18:10.118Z" Version="2.0" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">
        <saml2:Issuer>http://idpsite.com</saml2:Issuer>
        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
            <ds:SignedInfo>
                <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                <ds:Reference URI="#SamlAssertion-7a5b9c01049a0575cf20ed7e6371c728">
                    <ds:Transforms>
                        <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </ds:Transforms>
                    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <ds:DigestValue>osu7asdfe23Aw5c24Eaasdfas2vySr78=</ds:DigestValue>
                </ds:Reference>
            </ds:SignedInfo>
            <ds:SignatureValue>VagV/DDfrAk6CZZxwW/ ……+3Sd3H6AxlKZZS9sgwg7dJxBsw==</ds:SignatureValue>
            <ds:KeyInfo>
                <ds:X509Data>
                    <ds:X509Certificate>MIIC9TCCAd2gAwIBAgIJAOjG+ ……. Ac7iYYl7hWwjSu7lV5ZlYih0=</ds:X509Certificate>
                </ds:X509Data>
            </ds:KeyInfo>
        </ds:Signature>
        <saml2:Subject>
            <saml2:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" NameQualifier="">E2nqyveqmcEiqQTEMKGcCQ==</saml2:NameID>
            <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
                <saml2:SubjectConfirmationData InResponseTo="_9eb6887f-8b57-4c58-9ef1-e9c88248cf0b" NotOnOrAfter="2016-10-20T14:23:10.119Z" Recipient="https://login.microsoftonline.com/login.srf"/>
            </saml2:SubjectConfirmation>
        </saml2:Subject>
        <saml2:Conditions NotBefore="2016-10-20T14:13:10.119Z" NotOnOrAfter="2016-10-20T14:23:10.119Z">
            <saml2:AudienceRestriction>
                <saml2:Audience>urn:federation:MicrosoftOnline</saml2:Audience>
            </saml2:AudienceRestriction>
        </saml2:Conditions>
        <saml2:AttributeStatement>
            <saml2:Attribute Name="IDPEmail" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
                <saml2:AttributeValue>testuser@dummydomain.com</saml2:AttributeValue>
            </saml2:Attribute>
        </saml2:AttributeStatement>
        <saml2:AuthnStatement AuthnInstant="2016-10-20T14:18:10.118Z" SessionIndex="f89db7e5-ae43-477f-aa47-ef96da4b4227">
            <saml2:AuthnContext>
                <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
            </saml2:AuthnContext>
        </saml2:AuthnStatement>
    </saml2:Assertion>
</samlp2:Response>
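
A quick sanity check while troubleshooting is to confirm that the InResponseTo attributes in the SAML Response match the ID of the AuthnRequest, as required in point 5 above. A small Python sketch, assuming the request and response above are saved to local files (the file names are hypothetical):

import xml.etree.ElementTree as ET

SAML2 = "{urn:oasis:names:tc:SAML:2.0:assertion}"

# Hypothetical file names; save the AuthnRequest and Response XML locally first.
request_id = ET.parse("authn_request.xml").getroot().get("ID")
response = ET.parse("saml_response.xml").getroot()

scd = response.find(".//" + SAML2 + "SubjectConfirmationData")
assert response.get("InResponseTo") == request_id, "Response InResponseTo mismatch"
assert scd.get("InResponseTo") == request_id, "SubjectConfirmationData InResponseTo mismatch"
print("InResponseTo matches AuthnRequest ID:", request_id)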

 

See also:  SAML_IDP_ForOffice365.xml

Join us at CA World '16!


It's hard to believe that a year has passed and another CA World is upon us! If you'll be heading to Las Vegas the week of Nov. 14-18th, we'd love to have you stop by and see us at booth #1004 in the Security area. We will be offering demonstrations of our ToolBox for CA Single Sign-On and CoreBlox Token Service. You can also register to win a new iPad Mini!

CoreBlox President Todd Clayton will be presenting as part of Session SCT57S, "Enabling Your Identity Services", on Wednesday, Nov. 16th from 3pm-3:30pm PT in Security Theater 1. Stop by to learn how New York Community Bancorp (NYCB) deployed a centralized identity service based on virtualization and a federated architecture (RadiantOne FID) to support its CA Security infrastructure.

To schedule a meeting with us during CA World, contact us at info@coreblox.com or 1-877-TRY-BLOX (879-2569). We hope to see you in Vegas!


Bridging The Gap Between CA Single Sign-On and PingFederate

Photo credit: Ian McWilliams

Let’s face it: some things just go well together. Chocolate & Peanut Butter; Coffee & Cream; Vinz Clortho & Zuul; PingFederate and CA Single Sign-On (formerly SiteMinder)… I’ll give you a moment to let that last one sink in. Traditionally, organizations prefer to consolidate on a single vendor suite, but recently there has been a more accepting attitude toward combining different vendor technologies. That’s understandable, given that there are so many great products out there, each offering its own unique advantages. The driving factors for these combinations are varied: often it’s to get the best of all technologies, sometimes it may be to fill a gap, and every now and then it might involve a transition between technologies. Regardless of the driving factor, you need a plan in place if you want to successfully ‘bridge’ these technologies. Success in this case means a seamless implementation with no noticeable impact on the end users. In this example we will look at a common deployment which combines CA Single Sign-On, an industry-leading access management solution, with PingFederate, a leading SAML federation solution.

Both products are great at what they do and maintain a large implementation footprint.  Both products provide a ‘single sign-on’ function, but with a different method of implementation.  Since both products have the ability to ‘authenticate’ users and act as an authoritative source, wouldn’t it be great if they could work together?  Great News!  They Can!

To facilitate this functionality, we need to be able to exchange information and trust between both platforms while maintaining their native processing. This is accomplished through token exchange: specifically, through use of the CoreBlox Token Service (CTS) and the PingFederate CoreBlox Token Translator.

The exchange of tokens facilitates a bi-directional flow of trust between both vendors, meaning that if a user has already authenticated with one service, its token can be trusted and exchanged for a token from the other service. This is accomplished through the two components mentioned earlier. The CoreBlox Token Service is deployed within the Ping Jetty engine, or as a stand-alone instance. Depending on the use case (IdP or SP), CTS will validate an incoming SMSESSION, redirect a user to authenticate to get an SMSESSION, or create an SMSESSION based on information provided by the Ping adapter. Likewise, the Ping CTS Adapter works with CTS to provide the same functionality on its side, ensuring a new session is created based upon the trust with the CTS service.

In addition to basic trust and token exchange, there are also specific customizations which allow identifying attributes to be exchanged for additional authorization enforcement or content delivery bi-directionally.   

To sum up: whether you need on-going seamless sign-on between CA SSO and PingFederate OR just a temporary bridge between these products, look no further than the CoreBlox Token Service. Click here to download CTS for free and get started today! 

 

CA SSO: On-Premise or Cloud? Now you can do both!

Image courtesy of CA Technologies

 

The entire CoreBlox team was excited about the announcement that the latest CA Single Sign-On release (R12.6.02) includes integration with CA Identity Service!  I had a chance to see CA Identity Service in action at CA World last November. As impressive as it was as a standalone solution,  I think everyone knew that it would go to another level once it could work hand in hand with CA SSO. This announcement is a major step forward for CA, as companies that are already running CA SSO can now transition to a true hybrid cloud environment by pairing with CA IDaaS. Got Salesforce? Dropbox? Google G Suite? Office 365? No problem.  Your existing CA SSO users can now seamlessly transition from their on-prem apps to their cloud/SaaS apps with a single click.

Unofficial CA Single Sign-On Guide, Chapter 1: Ports!


One of the most common questions that comes up during CA Single Sign-On professional services engagements is: “What ports do I need to open for CA Single Sign-On?” This is generally followed by: “What does each port do?” These are great questions, and we wanted to consolidate the answers in one place. And so, without further ado, CoreBlox proudly presents the first chapter in our Unofficial CA Single Sign-On Guide: Ports!

When CA Single Sign-On is configured correctly, it just works and it works well! Sometimes getting through that initial configuration can be a bit like playing a game of Tetris, especially in an organization that relies on firewalls to control access to specific ports.

Below is a list of the default ports that are commonly associated with CA Single Sign-On implementations. By no means is this definitive, as configurations will vary between organizations based upon requirements and standards. However, it is a good starting point when working with security and network teams during the installation and configuration of CA Single Sign-On.

Port # | Use | Open Between | Comment
44441 | Web Agent Accounting Port | Web Agent / Policy Server | Accounting port
44442 | Web Agent Authentication Port | Web Agent / Policy Server | * Required - performs authentication requests to Policy Server
44443 | Web Agent Authorization Port | Web Agent / Policy Server | * Required - performs authorization requests to Policy Server
44444 | Web Agent Administration Port | Policy Server | Not used by the Web Agent; used by Policy Server for AdminUI
8080 | AdminUI HTTP | Browser / AdminUI Service | Used for non-secure connection to the WAMUI console
8443 | AdminUI HTTPS | Browser / AdminUI Service | Used for secure connection to the WAMUI console
8180 | JBoss Service Ports | Browser / JBoss | Not used in normal operation
389 | LDAP | Policy Server / User-Policy Store | Used for non-secure connection to an LDAP server
636 | LDAP (Secure) | Policy Server / User-Policy Store | Used for secure connection to an LDAP server
1433 | SQL | Policy Server / User-Policy Store | Used for communication with a SQL data source
44449 | OneView Agent | OneView Agent / OneView Monitor | Used for communication between the OneView Agent and Monitor
44450 | OneView Monitor | Browser / OneView Monitor | Port used by the OneView Monitor
7680 | Enhanced Assurance/Device DNA | Access Gateway / Policy Server | Used for Session Assurance functionality
8080 | Access Gateway ProxyUI | Browser / ProxyUI | Should not be installed on same server as AdminUI
543 | Access Gateway ProxyUI (SSL) | Browser / ProxyUI | SSL port for the ProxyUI
8001 | SNMP Agent | SNMP Agent / SNMP Monitor | Used if SNMP has been configured
161 | SNMP Port | SNMP Service | Used if SNMP has been configured
80 | HTTP | Browser / Web Agent | Standard communication port
443 | HTTPS | Browser / Web Agent | Standard communication port
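
When working through firewall requests with your network team, it can help to verify reachability of the key ports before an installation stalls. A quick Python check (the host name is a placeholder; substitute your own Policy Server, and run it from the Web Agent host):

import socket

HOST = "policyserver.example.com"   # placeholder; use your Policy Server host
PORTS = {44441: "accounting", 44442: "authentication", 44443: "authorization"}

for port, use in PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} ({use}) reachable")
    except OSError as err:
        print(f"{HOST}:{port} ({use}) blocked or down: {err}")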

 

 

Extend CA Single Sign-On with Axiomatics!


Two decades in the Identity & Access Management space has exposed us to our fair share of “where did we go wrong?” scenarios - organizations that thought they were following best practices and ended up creating problems for themselves over time.  One especially problematic area has to do with role management and traditional RBAC (role-based access control). Often, organizations start off with the best intentions and establish just a few roles:

  • Admin
  • Employee
  • Customer
  • Partner

The roles become more granular over time:

Admin | Employee | Customer | Partner
SuperAdmin | Employee - HR | Customer - Platinum Support | Partner - Support
RegularAdmin | Employee - IT | Customer - Gold Support | Partner - Implementation
LightAdmin | Employee - Sales | Customer - Trial | Partner - Temp
AdminTemp | Employee - Support | Customer - Temp | Partner - Marketing

Before you know it, that “handful” of roles you started with has expanded into a tangled web, creating an administrative burden and taxing the systems whose rules rely upon them. CoreBlox has seen environments with over 15,000 roles! In the IAM industry this is generally referred to as the dreaded “role proliferation” (cue Darth Vader theme).

Fortunately, there is a great alternative to RBAC. Our partner, Axiomatics, has pioneered the concept of Attribute-Based Access Control, also known as “ABAC”. The thought process behind ABAC is easy to understand: why create new data attributes to manage (e.g. Roles) when you can let the user data speak for itself?

Organizations that already use CA Single Sign-On for web access control have a distinct advantage when it comes to implementing an ABAC approach. The Axiomatics Extension for CA Single Sign-On allows policy decisions to be made by Axiomatics’ ABAC-based engine. A simple yes/no response is returned to CA SSO based upon the user’s attributes. It just works, no coding necessary!
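
To make the contrast with RBAC concrete, here is a toy Python illustration of an attribute-based decision. The attributes and the rule are invented for illustration and are not Axiomatics' actual policy language; the point is that the decision is computed from the user's own data, with no intermediate role object to provision or clean up:

def permit(user, resource, action):
    # "Platinum customers may read premium content" - a single rule over raw attributes.
    if action == "read" and resource.get("tier") == "premium":
        return user.get("customer_level") == "platinum"
    return False

user = {"id": "c1042", "customer_level": "platinum"}
print(permit(user, {"tier": "premium"}, "read"))    # True  -> access allowed
print(permit(user, {"tier": "premium"}, "write"))   # False -> access denied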

Are you interested in exploring the benefits of ABAC for your organization? Download this new white paper: Making a Business Case for Attribute Based Access Control

Unofficial CA Single Sign-On Guide, Chapter 2: The Installation Debugger


(This is the second chapter in our new series, the Unofficial CA Single Sign-On Guide. You can find Chapter 1 here.)

I’m sure you’ve seen it! Whether on one of those tacky motivational posters or during a 3 a.m. Tony Robbins infomercial: the concept of "trust." It is usually demonstrated by somebody blindly falling backwards and trusting their partner or team to catch them. It looks convincing when you see it on television, but if you are like me, you start wondering how many takes it took to make it look that easy. I believe it is part of human nature to want to trust, but in the end we usually go with ‘Trust, but verify!’ That verification piece is especially important when it comes to your SSO solution!

If you have installed a CA security product in the past, you have no doubt seen one of the following conclusion messages: ‘Installation Successful’, ‘Installation Successful but with errors’ or ‘Installation Failed’. Unfortunately, these messages are not always accurate. I have seen successful completions that were… well… not successful. Other times the install was successful with errors, but the installation log contained little to no information. So, what is one to do?

This brings us to the installation debugger. It is not in the manual, and often when I am on-site with a client they have no idea this function even exists. But yes, Virginia: there is a debugger!

Below are the methods for starting the debugger during Windows and Linux installations of CA Single Sign-On:

Windows

Running the debugger in Windows is very simple. Once you start the installer, just hold down the [Ctrl] key during the initialization screen (see below) until you see a DOS box pop up in the background. Once the DOS box has opened, you can release the [Ctrl] key and continue with your install. One important thing to note for Windows is that the DOS window will close once you have exited the installer, so before you hit that final button to exit, be sure to select all the content of the DOS window and copy and paste it into a text editor so that it can be saved for reference.

Initialization Screen - Hold down the [Ctrl] button until you see the screen below then release the control button.

You know the debugger has started once you see this DOS window pop-up in the background.

Linux

Unlike Windows, running the debugger in Linux will automatically write the content to a log file. 

Before running the installation script, enter the following command (note: this command could vary slightly depending on the shell in use):

export LAX_DEBUG=true

Then start the installer script as you normally would.

Running the debugger during the installation will not ‘fix’ a potential problem, but it may provide some specific information (or errors, if you are lucky) to assist you with finding the source of the problem so that you can resolve it.

 

Creating a ToolBox for the Modern Software Factory


If you’ve recently visited ca.com then you’re probably aware of CA Technologies' focus on the evolving needs of the enterprise as it builds the “Modern Software Factory”. At CA World 2016, CEO Michael Gregoire used his keynote to discuss companies that are built to change. Otto Berkes' keynote described what a Modern Software Factory is and why enterprises need to streamline innovation so that ideas can turn into new customer experiences quickly and efficiently.

He identified 5 key principles of a Modern Software Factory:

  1. Agility
  2. Experience
  3. Automation
  4. Security
  5. Insight

It was a fresh perspective on the challenges our customers face and how to meet them. I recently found myself reflecting on how CoreBlox, a CA Focus Partner, is already aligned with the vision for the Modern Software Factory. Many IAM industry people know of our architecture and services delivery capabilities, but we are also a software company. Our CoreBlox Token Service allows CA Single Sign-On to securely exchange tokens with PingFederate, an increasingly common need within large organizations that have security solutions from multiple vendors. Our ToolBox for CA Single Sign-On automates and streamlines common CA SSO administrative tasks while increasing overall security and easing regulatory compliance. Developing, refining and supporting these products has given us a taste of what it's like to run our own Modern Software Factory. But how do they contribute to our clients' own ability to adapt to an ever-changing market?

Here is a breakdown of how ToolBox for CA Single Sign-On embodies the essence of the Modern Software Factory:

• ToolBox allows you to be Agile in your daily security management practices. It enables you to easily promote SSO policies across environments and seamlessly onboard new applications.
• ToolBox helps to drive ever-evolving user Experiences. Companies that are releasing new applications and onboarding new users daily need to be able to control access by defining new policies and updating existing ones. ToolBox centralizes the management of these policies across environments so that the user experience is consistent and predictable.
• ToolBox is the Automation engine for CA Single Sign-On. Its intuitive user interface makes most of your common administrative tasks as simple as pushing a button. ToolBox's template-based approach makes it easy to re-use configurations that have already been created.
• ToolBox was designed to bring Security to your CA Single Sign-On operations. With ToolBox, you'll be able to delegate administrative functions and precisely control user access across environments. Simplified policy testing allows you to eliminate errors that cause unintended vulnerabilities. With all of your environment changes audited, compliance requirements are easy to fulfill.
• ToolBox delivers Insights into how your security policies are being configured and the subtle differences between your environments that could impact user experiences. Its optimization functions highlight subtle configuration tweaks that can improve performance and allow CA Single Sign-On to grow and change along with your business.

CoreBlox is committed to building products and solutions for the Modern Software Factory while incorporating its key principles into our own day-to-day experiences as a software company. We're excited to be aligned with CA Technologies on this quest!

Virtualize SailPoint IdentityIQ's Database with RadiantOne!


CoreBlox Senior Architect Anthony Hammonds recently participated in our partner Radiant Logic's webinar focused on how to virtualize SailPoint IdentityIQ's database with RadiantOne such that it can be easily extended for use with LDAP applications, WAM systems, and federation. The webinar playback and presentation can be found on Radiant Logic's web site:

http://www.radiantlogic.com/learning-center/events/webinars/y2017/webinar-612017/

As always, please contact us if you have any questions about Radiant Logic or SailPoint solutions!


CA Access Gateway Install Error: "JRE libraries are missing or not compatible"


We ran into a problem during a recent installation of CA Access Gateway 12.6 (formerly known as CA Secure Proxy Server) on Red Hat Linux, and would like to share the solution.

Upon launching the installer, the following error was displayed: "JRE libraries are missing or not compatible..."


This may have to do with insufficient permissions in the /tmp directory. In environments where obtaining the required permissions may not be straightforward due to how the server is locked down, security policies, etc., there is a simple workaround.

You need to create a new "temp" directory in a location where you do have the proper permissions (for example, /opt/myapplication/tmp), and then set an environment variable called "IATEMPDIR". Example:

mkdir /opt/myapplication/tmp

export IATEMPDIR=/opt/myapplication/tmp

You should now be able to launch the installer without encountering the "JRE libraries are missing or not compatible" error.

Good luck!

Identity as a Microservice


Overview

Microservices allow applications to be created from a collection of loosely coupled services. The services are fine-grained and lightweight. This improves modularity and enables flexibility during the development phase, making the application easier to understand. When designing applications, identity becomes a key factor in building out a personalized user experience. Identity also enables other microservices for tasks like authorization (with applications like Axiomatics), single sign-on, identity management and compliance.

However, access to profile data presents a challenge, since it is spread across multiple repositories, contained in other applications, or must even be consumed from other microservices. The Identity Microservice must be able not only to respond to requests for identity information through a standard protocol, but also to reach out to these identity repositories in an efficient and responsive manner. The Identity Microservice must also allow for both user-driven and server-to-server access to identity data.

The following diagram breaks down the components of the Identity Microservice:

[Diagram: Identity Microservice components]

The Identity Microservice at its core is made up of four layers:

1. OAuth Authorization Server
2. OpenID Connect UserInfo Endpoint
3. Federated Identity Service from applications like Radiant Logic’s FID
4. The server and web application clients of the Identity Microservice

Each of these layers performs a crucial role in securing access to identity data and also allows the microservice to obtain identity data from the required repositories. Breaking this down further:

• The OAuth Authorization Server provides secure access to the Identity Microservice
• The UserInfo Endpoint handles the requests for identity data and returns the requested profile information (claims)
• The Federated Identity Service provides a centralized hub for obtaining application-specific profile data from directories, applications, databases and other microservices
• Additionally, the Federated Identity Service needs to be able to aggregate and correlate profile data and leverage a real-time cache to ensure that access to profile data performs quickly and within the required application service levels

Today, the Identity Microservice’s components are based upon open standards and are both lightweight and highly leveraged by web applications and servers.

There are two main client flows supported by the microservice:

1. User-driven Web Application flow
2. Server-driven flow

Each of these flows requires a different means of interacting with the Identity Microservice.

User-Driven Web Application Flow

Identity is at the core of nearly all web applications - everything from the initial authentication and authorization through to personalization with profile data. When you log into your banking application, it not only needs to securely identify you as the user, but must also authorize access to your accounts and personalize the site for your profile. Would you trust a banking application that listed your identity as “User”?

The following diagram breaks down the user-driven Web Application flow:

[Diagram: user-driven Web Application flow]

1. User accesses the Web Application
2. The Web Application redirects the user to the Identity Microservice’s Authorization Server with a client ID and application scope
3. User authenticates and authorizes the request
4. Authorization Server redirects the user back to the Web Application with an authorization code
5. The Web Application sends the authorization code to the Authorization Server with its client secret
6. The Authorization Server returns an access token and ID token
7. The Web Application sends the access token to the Identity Microservice’s UserInfo endpoint
8. The Identity Microservice’s Federated Identity Service matches the application scope to the defined view and returns the requested attributes
9. The Authorization Server returns the requested user information (claims) from the UserInfo endpoint to the Web Application

There are several key factors in this flow (a code sketch follows the list):

1. The scope sent to the Identity Microservice is the application, or view, for the requested profile data
2. The view defined in the Federated Identity Service is application-specific and can be limited to just the profile data needed for the authorized application
3. Multiple application-specific views can be supported by the Identity Microservice
4. Authentication can be easily mapped back to the user’s profile repository by the Federated Identity Service, allowing client web applications to completely delegate authentication to the microservice
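
As a sketch of what steps 5 through 9 look like from the Web Application's side, here is a minimal example in Python using the requests library. All endpoints, credentials and scope names are placeholders, not the URLs of any particular product:

import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"         # placeholder Authorization Server
USERINFO_URL = "https://idp.example.com/openid/userinfo"   # placeholder UserInfo endpoint

# Step 5: exchange the authorization code (captured from the redirect) for tokens.
tokens = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": "AUTH_CODE_FROM_REDIRECT",                     # placeholder
    "redirect_uri": "https://app.example.com/callback",    # placeholder
    "client_id": "web-app",
    "client_secret": "CLIENT_SECRET",                      # placeholder
}).json()                                                  # step 6: access token + ID token

# Steps 7-9: present the access token to the UserInfo endpoint; the Federated
# Identity Service maps the scope to an application-specific view of the profile.
claims = requests.get(
    USERINFO_URL,
    headers={"Authorization": "Bearer " + tokens["access_token"]},
).json()
print(claims)   # the requested profile attributes (claims) for personalization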

     

Server-Driven Flow

While similar to the user-driven Web Application flow, no user interaction is present for this transaction. The Server-driven flow allows for backend access to profile data. In this case, the server is being authenticated, not the user.

The following diagram breaks down the Server-driven flow:

[Diagram: Server-driven flow]

1. Server sends client credentials and application scope to the Authorization Server
2. Authorization Server returns an access token and ID token
3. Server sends the access token to the UserInfo endpoint
4. Federated Identity Service matches the application scope to the defined view and returns the requested attributes
5. Authorization Server returns the requested user information (claims) from the UserInfo endpoint to the Server

This allows the server to access the same profile data as defined for a Web Application. Additionally, the same views in the Federated Identity Service can be leveraged, if desired, for both Servers and Web Applications.
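
From code, the Server-driven flow reduces to two HTTP calls. A minimal sketch along the same lines (endpoints and credentials again placeholders):

import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"         # placeholder
USERINFO_URL = "https://idp.example.com/openid/userinfo"   # placeholder

# Steps 1-2: client credentials grant; the server itself is authenticated.
tokens = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "backend-service",                        # placeholder
    "client_secret": "SERVICE_SECRET",                     # placeholder
    "scope": "account-app",                                # the scope names the application view
}).json()

# Steps 3-5: retrieve the profile attributes for the requested view.
claims = requests.get(
    USERINFO_URL,
    headers={"Authorization": "Bearer " + tokens["access_token"]},
).json()
print(claims)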

The Identity Microservice allows for powerful yet lightweight access to all the needed profile data in an efficient manner. This microservice can provide what is needed at the core of all applications, and in the Server-driven flow it can even be used for transaction-specific data unrelated to users. As the world moves toward a model of easily consumable services, the Identity Microservice must be one of the main considerations when designing an application.


Post Auto-Registration Issue with CA API Developer Portal (SaaS) Integrated with CA API Gateway (On-Premise)


Problem Description:

After registering with the SaaS CA API Developer Portal, the applications created on the Developer Portal cannot be synced with the CA API Gateway OTK database.

Solution:

1- Check the JDBC connection configured for OTK. When you installed OTK on the CA API Gateway, you were asked to configure a JDBC connection for the OTK persistence layer if you chose to use a SQL database. By default, this JDBC connection should be named “OAuth”, but in many cases it can be set to anything. That alone will not cause problems for OTK, but when you register with the SaaS Developer Portal, the auto-registration creates services and encapsulated assertions that contain JDBC Query assertions, and those assertions are mapped to the JDBC connection by the name “OAuth”. If your OTK JDBC connection has a different name, those JDBC queries will fail.

To fix the issue, you need to update the JDBC query assertions in the following services:

Portal Application Sync Fragment

Change the default connection OAuth to the connection you configured for your OTK.

    2- If you are using a dual API Gateway configuration and you Installed OTK onto both DMZ and INT gateways, after you register your DMZ API Gateway to the Developer Portal, the Application will show that it is out of sync. This is because the DMZ Gateway should not have the OTK Database configured. For most of the steps in deploying an application from the SaaS Portal to the API Gateway, the request can be handled by built-in OTK assertions where it will route the DB query requests to the INT gateway. The INT Gateway then queries the OTK DB. Unfortunately, a simple error in the portal sync service breaks the flow.

    In Portal Application Sync Fragment, the direct JDBC query will return API key count value in a context variable called: ${apiKeyCount.count}, meanwhile the OTK assertion will return the API Key count value in ${apiKeyCount}. The following policy will refer to ${apiKeyCount.count} for the API Key count value. Therefore, when trying to sync an Application from the DMZ Gateway, the OTK Assertion is used and returns the value in the wrong context variable.

    To fix this issue, simply add a context variable after the OTK assertion to assign the value of ${apiKeyCount} to ${apiKeyCount.count}, as shown in the screenshot and sketch below.

    Adding a Context Variable After the OTK Assertion
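
    For reference, here is a minimal sketch of what the added assertion might look like in exported policy XML. This assumes the standard L7p:SetVariable export format; the Base64Expression value "JHthcGlLZXlDb3VudH0=" is simply "${apiKeyCount}" base64-encoded. Verify the exact format against an export from your own gateway before editing XML by hand.

        <L7p:SetVariable>
            <!-- base64 of "${apiKeyCount}" -->
            <L7p:Base64Expression stringValue="JHthcGlLZXlDb3VudH0="/>
            <!-- write the value into apiKeyCount.count -->
            <L7p:VariableToSet stringValue="apiKeyCount.count"/>
        </L7p:SetVariable>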

    3- If you are using the Cassandra database as the OTK token store, you need to upgrade your CA API Gateway to v9.4 or later and OTK to v4.3 or later; otherwise, integration with the SaaS CA API Developer Portal is not supported. Only OTK v4.3 and later have the updated database schema needed to store API key and API access information, which is required when creating an application from the SaaS CA API Developer Portal.

    If your current OTK version is 3.x, you need to manually uninstall and re-install OTK to upgrade to v4.3. If your current OTK version is 4.x, you can use the upgrade button to upgrade to v4.3. Unfortunately, due to some defects in OTK, some manual configuration is required after the automatic upgrade. Please see my other blog post, “Layer 7 Gateway OTK Upgrade,” for details.

    After the upgrade, the SaaS CA API Developer Portal will still not work properly due to a defect in the current Portal Application Sync Policy Fragment. The fragment first attempts a JDBC query and, if that fails, relies on the OTK assertion to make the NoSQL query against the Cassandra DB. Unfortunately, the OTK assertion returns the result in the wrong context variable, which breaks the workflow.

    The fix is the same as above: add a context variable to assign the value of ${apiKeyCount} to ${apiKeyCount.count}.

    Add a Context Variable

    4- To avoid having the SaaS CA API Developer Portal push data to the CA API Gateway and break the API Gateway's runtime traffic, communication between the two occurs by having the CA API Gateway pull information from the CA API Developer Portal. Therefore, syncing any configuration or modifications from the Portal to the Gateway requires the API Gateway to make outbound calls.

    In most enterprise environments, outbound calls must go through a secure proxy or they will be blocked by the firewall. Here are a few things to know about the proxy configuration:

    a- You need to configure a global proxy for the registration URL to work; you can disable or delete that global proxy after registration. However, you need to re-enable or re-add it whenever you run Portal Upgrade Tasks.

    CA API Gateway - Policy Manager

    b- The outbound proxy only works with the “Automatic” and “Scripted” deployment types; the “On-Demand” deployment type does NOT support proxy settings. For “On-Demand”, a portal deployer module runs in the background to sync APIs from the SaaS Portal to the API Gateway. That module is not configurable by API Gateway admins, and it makes a websocket call to the SaaS Portal to which a proxy setting cannot be added.

    Add API Proxy

    c- You need to update every routing assertion inside the Portal services to manually add the proxy configuration; see the HTTP Routing Properties example after the list. Here is the list of services you need to change:

    Move Metrics Data Off Box

    Portal Application Sync Fragment

    Portal Bulk Sync Application

    Portal Check Bundle Version

    Portal Delete Entities

    Portal Sync Account Plan (two routing assertions need to be edited)

    Portal Sync API (two routing assertions need to be edited)

    Portal Sync API Plan (two routing assertions need to be edited)

    Portal Sync Application

    Portal Sync Fragment

    Portal Tenant Sync Policy Template

    API Portal SSO SAML Validation Service Fragment

    Portal Sync SSO

    For example:

    HTTP Routing Properties

    Create a Dependency Check Service on the Layer7 API Gateway Using the Restman Service


    Issue: 

    Some API Gateway components, such as Identity Providers, Encapsulated Assertions, Policy Fragments, cluster-wide properties, stored passwords, and private keys, are consumed by many services configured on the gateway. When you try to edit any of these components, it is difficult to tell which APIs the change will affect, which makes changes to shared components risky. If you can find out which APIs have a dependency on the target component, you can come up with a deployment plan and even design the changes to accommodate all affected APIs.

    Solution:

    The restman service on the API Gateway provides a search for the dependencies of an API, returning a list of the shared components (Encapsulated Assertions, Identity Providers, etc.) that the API depends on. However, it does not offer the reverse search: finding which APIs depend on a given shared component. The attached XML file is a dependency check service that identifies which APIs depend on certain shared components. It is built on the restman service, calling it as efficiently as possible to get the result you need. To avoid consuming too many gateway resources with restman calls, you can cache the result and skip unnecessary duplicate searches. Optionally, you can protect the service with rate and quota limits and restrict when it can be called.
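
    Conceptually, the reverse search walks every service on the gateway and checks each one's dependency list for the target component. The Python sketch below shows the idea against the restman API; the endpoint paths follow the documented /restman/1.0 layout, but treat them as assumptions and verify them against your gateway version. The attached XML service implements this logic, plus caching and protection, natively in policy.

        # Reverse-dependency sketch: list all services via restman, then fetch
        # each service's dependency analysis and look for the target name.
        import requests
        import xml.etree.ElementTree as ET

        GATEWAY = "https://test.example.com:8443"  # gateway host/port (placeholder)
        AUTH = ("admin", "password")               # gateway admin credentials
        NS = {"l7": "http://ns.l7tech.com/2010/04/gateway-management"}

        def service_ids():
            r = requests.get(f"{GATEWAY}/restman/1.0/services", auth=AUTH, verify=False)
            r.raise_for_status()
            root = ET.fromstring(r.content)
            return [item.find("l7:Id", NS).text for item in root.findall(".//l7:Item", NS)]

        def depends_on(service_id, target_name):
            r = requests.get(
                f"{GATEWAY}/restman/1.0/services/{service_id}/dependencies",
                auth=AUTH, verify=False,
            )
            r.raise_for_status()
            return target_name.lower() in r.text.lower()  # crude name match

        for sid in service_ids():
            if depends_on(sid, "Internal Identity Provider"):
                print("dependent service:", sid)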

    How to Deploy Gateway Dependency Check Service:

    1.    Publish the Restman service on the gateway

    Publish Internal Service

    Publish_Internal_Service.png

    Choose Gateway REST Management Service and publish

    Gateway_REST_Management_Service.png

    2.    Create an empty REST API on the gateway

    Service Name: Gateway Dependency Check Service

    Gateway URL: https://<YOUR_GATEWAY_HOST>:<YOUR_GATEWAY_PORT>/dependency/check

    Publish_Web_API_Wizard.png

    3.    Import the attached XML file into your REST API

    Import_XML.png

    4.    If your gateway's restman service is available via a different hostname and port, update the restmanHost and restmanPort context variables in the service. These variables are defined in the “Init” folder.

    restman_variables.png

    5.    If you have certain folders that you want to bypass in the search by default, you can add a cluster-wide property with Folder IDs separated by spaces.

    bypass_folders.png


    This completes the deployment of the Dependency Check service.


    How to use Gateway Dependency Check Service:

    1.    Open a browser and hit the following URL:

    https://<YOUR_GATEWAY_HOST>:<YOUR_GATEWAY_PORT>/dependency/check?targetName=<NAME_OF_YOUR_TARGET_COMPONENT>

    Example: https://test.example.com:8443/dependency/check?targetName=Internal Identity Provider

    Provide API Gateway Admin credentials to access the service. 
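
    If you prefer to script the call, here is a small Python sketch; it URL-encodes the space in the component name automatically and uses placeholder admin credentials. verify=False is only appropriate if your gateway still uses a self-signed certificate.

        # Call the dependency check service with basic auth; requests encodes
        # "Internal Identity Provider" as Internal%20Identity%20Provider.
        import requests

        resp = requests.get(
            "https://test.example.com:8443/dependency/check",
            params={"targetName": "Internal Identity Provider"},
            auth=("admin", "password"),  # placeholder credentials
            verify=False,
        )
        print(resp.status_code)
        print(resp.text)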

    2.    Optional Query Parameters:

    Parameter | Value | Description
    targetName | Names of target components | Required. The names of the components whose dependencies you wish to check, as a comma-separated list. Not case sensitive.
    refresh | true/false (default: false) | By default, the search result is cached for 5 minutes. Set to true to force a refresh.
    overwriteQuota | true/false (default: false) | By default, the service allows 10 calls per day to protect the gateway itself. Set to true to disable the quota check.
    overwriteAvailability | true/false (default: false) | By default, the service can only be called during off hours (9 pm to 6 am local time) to avoid affecting production traffic. Set to true to disable this restriction.
    addToBlacklist | Folder IDs | By default, the service picks up the bypass folders from the cluster-wide property; use this parameter to dynamically add folders to that blacklist.
    removeFromBlacklist | Folder IDs | By default, the service picks up the bypass folders from the cluster-wide property; use this parameter to dynamically remove folders from that blacklist.
    overwriteBlacklist | Folder IDs | By default, the service picks up the bypass folders from the cluster-wide property; use this parameter to dynamically replace that list.
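
    As an example of combining the optional parameters, the following call (continuing the Python sketch above, with the same placeholder credentials) forces a fresh search for two components and bypasses the quota and off-hours checks; use this sparingly on production gateways.

        import requests

        resp = requests.get(
            "https://test.example.com:8443/dependency/check",
            params={
                "targetName": "Internal Identity Provider,OAuth",
                "refresh": "true",
                "overwriteQuota": "true",
                "overwriteAvailability": "true",
            },
            auth=("admin", "password"),
            verify=False,
        )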

    3.    Example Result

    Search_Result.png

     

    Preparing CTS for PingFederate 10.x


    PingFederate 10.x supports the new Google Chrome changes for SameSite cookies. Using the CoreBlox Token Adapter to exchange tokens with Broadcom/Layer 7 SiteMinder will require some configuration changes. Specifically:

    1. Within the CoreBlox Token Adapter (provided by Ping Identity):

      1. “Secure Cookies” must be ENABLED

      2. “HTTPonly” must be ENABLED

    2. If you intend to pass SameSite cookies to SiteMinder, you must ensure that you have patched your SiteMinder Web Agents so that they will respect the new ACO parameter that applies a SameSite fix. A description of the solution can be found here.
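
    For context, Chrome's SameSite change means that any cookie sent in a cross-site context must carry both the SameSite=None and Secure attributes. With the adapter settings above, the exchanged SiteMinder session cookie should end up looking something like the following (cookie name and value illustrative):

        Set-Cookie: SMSESSION=<session-token>; Path=/; Secure; HttpOnly; SameSite=None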

    For more information about PingFederate 10.x, please contact Ping Support.
