Arraya Insights

February 28, 2019 by Arraya Insights

Dell EMC just released a software update for its Unity line of midrange storage offerings, one that promises to put an array of new features into the hands of data center admins. Far from just adding new bells and whistles, Dell EMC’s Unity 4.5 OE looks to make data storage, and data management, more efficient. To gain some expert insight into this new update, we sat down with members of Arraya’s Data Center Practice. Our team identified four key changes that could have organizations strongly considering a jump to 4.5.

Advanced Deduplication

Perhaps the most important new feature delivered by Dell EMC Unity’s 4.5 OE is advanced deduplication. This optional feature builds on Unity’s preexisting data reduction capabilities to further reduce the amount of storage space devoted to recurring data sets. Dell EMC estimates that, when activated, advanced deduplication can triple Unity’s data reduction performance.

Here’s how advanced deduplication fits into the process. When a data set is first written to Unity, it goes through an initial deduplication algorithm. If that algorithm detects no patterns, in 4.5 the data then goes to advanced deduplication for further analysis. This separate algorithm translates data blocks into fingerprints to uncover patterns the first analysis missed. Should it find any, it will remove the redundant data blocks and insert corresponding reference points. Advanced deduplication can analyze data at the LUN, File System, or Data Store level.
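
To make the fingerprint idea concrete, here is a minimal, generic sketch of fingerprint-based deduplication in Python. It illustrates the general technique only, not Dell EMC’s actual algorithm; the block size and function names are assumptions for illustration.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # illustrative block size; Unity's real block size may differ


def deduplicate(data: bytes):
    """Toy fingerprint-based deduplication: store each unique block once and
    replace repeats with a reference to the stored copy."""
    store = {}        # fingerprint -> block contents
    layout = []       # ordered list of fingerprints describing the original data
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:
            store[fingerprint] = block      # first time this block is seen: keep it
        layout.append(fingerprint)          # repeats become references, not copies
    return store, layout


def rehydrate(store, layout) -> bytes:
    """Rebuild the original data from the unique blocks plus the reference list."""
    return b"".join(store[fp] for fp in layout)


if __name__ == "__main__":
    sample = b"ABCD" * 4096 + b"WXYZ" * 4096 + b"ABCD" * 4096
    store, layout = deduplicate(sample)
    assert rehydrate(store, layout) == sample
    print(f"{len(layout)} logical blocks stored as {len(store)} unique blocks")
```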

MetroSync Manager

Advanced deduplication is certainly a big deal; however, it’s not the only new feature in Unity’s 4.5 OE worth highlighting. Another new tool is MetroSync Manager. This application is all about ensuring resource uptime and availability during a disaster scenario without increasing the pressure on onsite technology personnel.

MetroSync Manager can monitor both sides of a synchronous replication relationship. In the event that a catastrophic incident – say, a power outage – takes one of those environments offline, MetroSync Manager can initiate automatic failover to ensure business continuity. Take MetroSync Manager out of the equation and failover would still be possible. It would, however, have to be done manually using Unity’s Cabinet Level Failover process. This delays recovery, not only as the admin works through the process, but in terms of the time it takes to discover the problem. MetroSync Manager continually checks the status of each environment and can respond as soon as an incident occurs to keep the impact to a minimum. Additionally, failover can take place regardless of whether replication occurs in a one-directional or bi-directional configuration.
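
Conceptually, an automated failover manager is a watchdog loop over both sides of the replication pair. The sketch below illustrates that idea in generic Python; the polling interval and the site_is_healthy / promote_secondary helpers are hypothetical placeholders, not MetroSync Manager’s actual interface.

```python
import time

CHECK_INTERVAL_SECONDS = 10  # illustrative polling interval


def site_is_healthy(site: str) -> bool:
    """Hypothetical health probe; a real monitor would check array, network,
    and replication-link status rather than always returning True."""
    return True


def promote_secondary(secondary: str) -> None:
    """Hypothetical failover action standing in for an array-level promotion."""
    print(f"Promoting {secondary} to primary and redirecting client access")


def watch_replication_pair(primary: str, secondary: str) -> None:
    """Continuously watch both sides of a synchronous replication pair and
    fail over automatically when the primary stops responding."""
    while True:
        if not site_is_healthy(primary):
            # Automatic failover: no admin has to notice the outage first,
            # which is the delay the manual cabinet-level process introduces.
            promote_secondary(secondary)
            break
        if not site_is_healthy(secondary):
            print(f"Warning: secondary site {secondary} is unreachable; "
                  "failover protection is degraded")
        time.sleep(CHECK_INTERVAL_SECONDS)
```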

File-Level Retention

Also coming to the 4.5 version of Unity’s software is file-level retention. All businesses are increasingly beholden to complex external (or internal) regulations concerning when they can and can’t let go of stored data. File-level retention, which is automatically enabled in version 4.5, can support admin compliance efforts.

Simply put, file-level retention lets admins define sets of files or directories as unalterable before a given date. These files, dubbed WORM (Write-Once, Read-Many) files, are safe from purposeful modification, accidental deletion, or any other changes that could land an organization in hot water with regulators. Unity’s file-level retention comes in two flavors. The first, called file-level retention enterprise (FLR-E), uses NAS protocols to prevent users from modifying protected data. However, it won’t prevent file system deletion performed by admin-level accounts. The other method, file-level retention compliance (FLR-C), is more complex and is intended for organizations subject to federal regulations.
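
The underlying idea is easy to picture in code: each protected file carries a retention date, and any change or deletion is refused until that date passes. The sketch below is a generic illustration only; the paths and dates are made up and nothing here reflects Unity’s internal mechanics.

```python
from datetime import datetime, timezone
from typing import Optional

# Illustrative retention table: path -> date before which the file is immutable.
RETENTION_UNTIL = {
    "/fs01/records/2018-q4-ledger.csv": datetime(2026, 1, 1, tzinfo=timezone.utc),
}


def can_modify(path: str, now: Optional[datetime] = None) -> bool:
    """A file under retention is write-once/read-many until its retention date
    passes; anything not listed is freely modifiable."""
    now = now or datetime.now(timezone.utc)
    expires = RETENTION_UNTIL.get(path)
    return expires is None or now >= expires


def delete_file(path: str) -> None:
    if not can_modify(path):
        raise PermissionError(f"{path} is under retention and cannot be deleted")
    print(f"Deleting {path}")


print(can_modify("/fs01/scratch/tmp.txt"))             # True: no retention set
print(can_modify("/fs01/records/2018-q4-ledger.csv"))  # False until the date passes
```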

Virtual Storage Appliance (VSA) Professional Edition

There’s one final feature included as part of Unity’s 4.5 OE that our Data Center team wanted to point out. By way of built-in integration with Dell EMC Unity’s Virtual Storage Appliance (VSA) Professional Edition, this update will also grant organizations access to software-defined storage backed by high availability functionality.

Using VMware’s ESXi platform as its foundation, Dell EMC UnityVSA can serve as a flexible storage solution in situations where a dedicated array would far surpass actual needs, e.g., test sites or branch locations. Admins can spin up a Dell EMC UnityVSA on general-purpose hardware, letting them react to demands quickly without incurring the financial and time investment of incorporating new physical infrastructure into a data center. Further flexibility comes during licensing, as Dell EMC UnityVSA is available in 10TB, 25TB, or 50TB capacity versions.

Next Steps: Upgrading to a more efficient data center with Dell EMC Unity

Want to dive deeper into these features? Ready to update your Dell EMC Unity to version 4.5 as you build a more efficient, modern data center? Arraya Solutions’ Data Center team is here to help. Our expert resources can guide your organization through every step of the process. Visit https://www.arrayasolutions.com//contact-us/ to start a conversation with them today.

As always, you can leave us a comment on these or any of our blogs through social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Share your thoughts then follow us to stay updated on our industry insights and unique IT learning opportunities.

 

February 26, 2019 by Arraya Insights

Global IT spend is projected to hit roughly $3.8 trillion this year, an increase of 3.2% from 2018’s total, according to research conducted by Gartner. Perhaps unsurprisingly, one of the principal forces driving this growth is the cloud. Spend on enterprise software solutions, which includes cloud technology, is projected to grow by 8.5% from last year. Stripping away the rest of that bucket and looking exclusively at cloud spend, Gartner predicts a growth rate of 17.5%. However, there is a substantial amount of risk lurking just beneath the surface of all of that cloud positivity.

Separate research, performed by ParkMyCloud, builds off Gartner’s figures to determine just how much of what organizations are investing in the cloud actually goes to waste. The end result of their calculations? A cool $14.1 billion worth of cloud spend could ultimately prove to be for naught.

The mindset moving forward for organizations shouldn’t necessarily be to look for opportunities to spend more on the cloud, but rather for ways to spend smarter on the cloud.

Two culprits responsible for increasing cloud waste

The easiest way for organizations to begin optimizing cloud spend would be to avoid two issues raised in ParkMyCloud’s examination. These are:

  • Overprovisioning cloud environments.
    Let’s pull in one more data set. RightScale believes 40% of cloud instances are one size too big. One size can mean a lot in terms of overinflating cloud spend. This might be partly due to residual approaches left over from managing legacy, onsite IT environments. When dealing with physical infrastructure, it makes sense to leave wiggle room to account for unexpected spikes in demand. However, the cloud allows for more elastic growth, meaning businesses can adjust their usage on the fly as opposed to paying for more than they need all year long.
  • Spending on idle time.
    Cloud promises around-the-clock availability, but not every system or data set requires that. Obviously, uptime and availability are key characteristics of production environments that live in the cloud. But what about cloud environments used for pilot testing or as development sandboxes? These environments don’t need to stay live 24/7, and yet many businesses are paying for exactly that. Audits can provide rock-solid figures regarding usage that organizations should then take back to their cloud provider to jumpstart a conversation around optimizing their cloud environment through scheduled uptime or some other means (a back-of-the-envelope sketch of both waste sources follows this list).
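
As promised above, here is a back-of-the-envelope sketch of how those two waste sources add up for a single instance. Every rate and hour count in it is a hypothetical example, not a quoted cloud price.

```python
# Rough estimate of the two waste sources above. All rates and hours below are
# hypothetical examples, not real provider pricing.

HOURS_PER_MONTH = 730


def idle_time_waste(hourly_rate: float, hours_actually_needed: float) -> float:
    """Cost of paying 24/7 for an instance (e.g., a dev sandbox) that is only
    needed part of the time."""
    return hourly_rate * (HOURS_PER_MONTH - hours_actually_needed)


def oversize_waste(current_hourly_rate: float) -> float:
    """Rough cost of running one instance size too large, assuming the next
    size down costs about half as much (a common pattern in cloud pricing)."""
    return current_hourly_rate * HOURS_PER_MONTH * 0.5


if __name__ == "__main__":
    # Dev sandbox used ~217 hours per month but billed around the clock
    print(f"Idle-time waste:  ${idle_time_waste(0.40, 217):,.2f}/month")
    # Production instance running one size larger than utilization justifies
    print(f"Oversizing waste: ${oversize_waste(0.80):,.2f}/month")
```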

Next steps: Let Arraya help you optimize your cloud spend

Is your organization among those planning to increase cloud spend in 2019? Want to optimize those investments to ensure the greatest possible return? Arraya’s Cloud Optimize service can help. This service brings together a unique combination of assessment, remediation, and management solutions in order to foster a holistic cloud strategy, one that takes an application-centric approach to determining the best home for critical workloads.

If you’d like to learn more about Arraya Cloud Optimize Service or how to make sure your organization achieves full value from cloud investments, visit https://www.arrayasolutions.com//contact-us/ to start a conversation with our team of cloud experts.

As always, feel free to leave us a comment on this or any of our blogs on social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Remember to follow us to stay up to date on our industry insights and unique IT learning opportunities.

February 21, 2019 by Arraya Insights

Cisco recently released version 6.3, the latest iteration of the software powering its Firepower family of cyber security solutions. Included as part of this update are several features that have long sat atop the wish lists of Cisco security shops. We caught up with members of our Network and Security team to learn more about what’s new in Firepower version 6.3 and what these changes could mean for customers.

Multi-instance for Firepower

One of the headline features the 6.3 release brings to Firepower, specifically to the Firepower 4100/9300 with Firepower Threat Defense (FTD), is multi-instance. Previously, admins could deploy only a lone instance of FTD on a given security appliance. As a result of this update, however, admins can now spin up multiple virtual appliances per security device. Each such appliance has its own FTD container, and admins can customize each one independently.
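
As a purely conceptual illustration of why multi-instance helps with scaling, the sketch below models a chassis whose finite resources are carved into independently configured firewall instances. The class names and core counts are assumptions for illustration; this is not FXOS or FTD syntax.

```python
from dataclasses import dataclass, field

# Conceptual model only: names and core counts are illustrative.


@dataclass
class Chassis:
    name: str
    total_cores: int
    instances: dict = field(default_factory=dict)  # instance name -> core count

    def cores_free(self) -> int:
        return self.total_cores - sum(self.instances.values())

    def add_instance(self, instance_name: str, cores: int) -> None:
        """Each logical firewall instance gets its own slice of the chassis and
        its own independent configuration."""
        if cores > self.cores_free():
            raise ValueError(f"Not enough cores left on {self.name} for {instance_name}")
        self.instances[instance_name] = cores


chassis = Chassis(name="edge-fw-01", total_cores=48)
chassis.add_instance("dmz-ftd", cores=16)       # one tenant / security zone
chassis.add_instance("internal-ftd", cores=16)  # another, managed separately
print(f"{chassis.cores_free()} cores still available for future instances")
```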

This revamped security architecture can support organizations in their pursuit of two constant data center objectives. By using this approach, admins can deliver a data center that is both highly available and flexible enough to scale alongside organizational demands.

Two-factor authentication

Another new feature ushered in by version 6.3 is two-factor authentication for FTD. Remote users connecting via a VPN can now take advantage of the extra security of two-factor authentication. The initial factor in the authentication process can be validated by any RADIUS or LDAP/AD server. Secondary validation can occur through either an RSA token or a Duo passcode sent to a user’s internet-connected device.
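
The flow itself is straightforward to sketch: both checks must pass before the VPN session is allowed. The Python below is a generic illustration with in-memory stand-ins for the directory lookup and the one-time passcode check; it is not Cisco’s implementation, and the credentials are invented.

```python
# Generic two-step verification flow. The directory and passcode tables below
# are hypothetical stand-ins for a RADIUS/LDAP lookup and an RSA/Duo check.

DIRECTORY = {"asmith": "correct-horse-battery"}   # hypothetical primary credentials
ACTIVE_PASSCODES = {"asmith": "493817"}           # hypothetical one-time codes


def check_directory_credentials(username: str, password: str) -> bool:
    """First factor: something the user knows, validated against a directory."""
    return DIRECTORY.get(username) == password


def verify_one_time_passcode(username: str, passcode: str) -> bool:
    """Second factor: a short-lived code from a token or app the user has."""
    return ACTIVE_PASSCODES.get(username) == passcode


def authenticate_vpn_user(username: str, password: str, passcode: str) -> bool:
    # Both factors must pass; a stolen password alone is not enough.
    return (check_directory_credentials(username, password)
            and verify_one_time_passcode(username, passcode))


print(authenticate_vpn_user("asmith", "correct-horse-battery", "493817"))  # True
print(authenticate_vpn_user("asmith", "correct-horse-battery", "000000"))  # False
```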

Leveraging two-factor authentication can ensure users have the flexibility they want to work remotely without opening the organization up to any unnecessary risks.

Local authentication for VPN users

On the subject of authentication, version 6.3 brings additional verification capabilities to the table. Admins are now able to create users by way of Firepower Device Manager. They can use this locally hosted account database to authenticate access requests coming in through a remote VPN connection. In this type of arrangement, that local cache of accounts can serve as either the primary or the fallback verification method.
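
The primary-versus-fallback ordering can be sketched as trying methods in sequence and only moving on when a method is unreachable. The snippet below is a generic illustration; the exception, account data, and function names are assumptions, not part of Firepower Device Manager.

```python
# Sketch of primary-then-fallback authentication order. In the scenario above,
# the primary would be an external directory server and the fallback the
# locally hosted account database.

LOCAL_ACCOUNTS = {"jdoe": "local-password"}  # hypothetical local user cache


class ServerUnreachable(Exception):
    pass


def external_directory_auth(username: str, password: str) -> bool:
    # Simulate the external server being down so the fallback path runs.
    raise ServerUnreachable("primary directory is not responding")


def local_database_auth(username: str, password: str) -> bool:
    return LOCAL_ACCOUNTS.get(username) == password


def authenticate(username: str, password: str) -> bool:
    """Try the primary method first; only consult the local database when the
    primary cannot be reached, mirroring the fallback arrangement above."""
    for method in (external_directory_auth, local_database_auth):
        try:
            return method(username, password)
        except ServerUnreachable:
            continue  # primary unavailable: move on to the next method
    return False


print(authenticate("jdoe", "local-password"))  # True via the local fallback
```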

Of course, proximity is no longer a prerequisite for an attack. Given the ever-increasing threat posed by far-off malicious actors, it’s critical for organizations to take whatever steps necessary to tightly manage remote access to sensitive data.

Next Steps: Is Cisco’s Firepower 6.3 release the right fit for you?

These are just a few of the changes the 6.3 version of Firepower software can bring to organizational security postures. Want to learn more about what else it has in store? Thinking about upgrading your existing Firepower deployment or bringing the solutions to your company for the first time? Arraya’s Network and Security teams can help. Our experts are available to help you assess your current security environment and address any gaps with solutions and strategies designed to fit your individual needs. Visit https://www.arrayasolutions.com//contact-us/ today to start a conversation.

As always, feel free to leave us a comment on this or any of our blogs through social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Furthermore, remember to follow us to stay up to date on our industry insights and unique IT learning opportunities.

February 18, 2019 by Arraya Insights

Estimates vary as to how many organizations globally consider themselves compliant with the European Union’s General Data Protection Regulation (GDPR). One thing is for sure: organizations that have yet to cross that line have plenty of motivation to do so soon. Just last month, Google became the first major tech company dinged under GDPR. The CNIL, France’s independent data privacy regulatory body, hit Google with a roughly $57 million fine for failing to keep customers informed about how their data is used and for failing to provide sufficient clarity into the company’s data consent policies. When it comes to achieving GDPR compliance, however, the benefits go beyond avoiding fines.

In the first entry of its 2019 Cyber Security Series – entitled Maximizing the value of your data privacy investments – Cisco argued data privacy spend has paid off in numerous, and even unexpected, ways. These perks are not unique to GDPR compliance. They are a byproduct of investing in the people, processes and tools needed for smarter, more secure data stores.

Here are three of the more surprising ways in which organizations have benefited from their data privacy spend.

Benefit #1: Shorter sales cycles

Maybe it’s the steady march of high-profile data breaches, but customers appear to be homing in on security. In Cisco’s study, almost 9-in-10 (87%) participants reported experiencing sales delays stemming from customer data privacy concerns. In the 2017 version of the study, just 66% of organizations reported that same hesitation.

Here’s the thing: organizations able to demonstrate a higher degree of GDPR preparedness actually experienced shorter delays. Those currently ready for GDPR saw delays of 3.4 weeks. Among organizations roughly a year out from GDPR-readiness, delays went up to 4.5 weeks. For those more than a year away? Try an average of 5.4 weeks.

Product or service quality will always be important to the sales process. Still, it clearly doesn’t hurt to be able to quickly demonstrate a data privacy-centric mindset.

Benefit #2: Lower impact security incidents

As far as data breaches go, there was good news in 2018 and there was bad news. On the positive side of things, the total number of breaches decreased by 23% last year according to the Identity Theft Resource Center. Now for the bad news: Attackers managed to steal 447 million total consumer records in 2018, an increase of 126%. So, even though the bad guys won less, when they did, they won big.

Cisco’s research also looked at the impact of GDPR preparedness on incident severity. It found organizations that consider themselves GDPR-ready reported having an average of 79,000 records impacted by a data breach. Compare that to 100,000 for organizations less than a year out and 212,000 for companies more than a year away.

Furthermore, GDPR-ready companies suffered an average of 6.4 weeks of downtime due to incidents and just 37% of those organizations faced a loss of $500K or more. In both instances, those figures increase dramatically as GDPR-readiness decreases. Businesses more than a year away saw an average of 9.4 weeks of downtime and 64% faced a loss equal to or greater than $500K.

As Tom Clerici, our Cyber Security Practice Director, likes to point out, compliance and security don’t always travel hand-in-hand. That doesn’t mean they’re total strangers either. An increased awareness of – and willingness to invest in – security concerns can pay off.

Benefit #3: Fewer data breaches overall

There’s no such thing as a cyber security silver bullet. Even organizations that make all the right moves can have their efforts undone by a moment of human error. Organizations that have prioritized GDPR readiness have at least taken steps to reduce the likelihood of an incident, according to Cisco’s findings.

The organization’s researchers noted that the probability of a GDPR-compliant organization suffering a data breach sat at 74%. That’s not bad when compared to less-ready businesses. Companies less than a year out have an 80% probability of suffering a breach while those more than a year out have an 89% chance.

Given the harder-to-quantify risks of a data breach, such as a loss of customer confidence, any chance to reduce the likelihood of an attack seems worth looking into.

Next Steps: Achieving GDPR compliance and true data security

If your organization is still working toward GDPR compliance, or is unsure of how to get there, don’t worry, you’re not alone. Given the risks – fines for non-compliance with GDPR can go as high as 4% of annual global turnover or €20 million, whichever is greater – the sooner you reach that goal, the better. Arraya has the tools and expertise needed to help your organization get in step with GDPR.

Our Cyber Security team can perform a comprehensive GDPR Preparedness Workshop. This two-hour engagement will help determine if your company falls under GDPR’s widening regulatory umbrella, identify regulatory shortfalls, and recommend improvements to boost not only compliance, but your cyber security posture as a whole. Visit https://www.arrayasolutions.com//contact-us/ to schedule your session now or to connect with our Cyber Security team.

As always, feel free to leave us a comment on this or any of our blogs through social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Remember to follow us to stay up to date on our industry insights and unique IT learning opportunities.

February 13, 2019 by Arraya Insights

Modern data centers are incredibly complex organisms, and they seem to only get more intricate with each passing year, quarter, or even day. Overseeing these environments – both from a hands-on and a strategic perspective – requires a tremendous investment of time, energy, and resources. With that in mind, we decided to use this, the third post in our ongoing investigation of VMware Cloud (VMC) on AWS, to look at HCX, a component technology designed to make one aspect of data center management less complicated.

HCX (which stands for Hybrid Cloud Extension) is a networking solution that can streamline and optimize workload migrations between on-premises and VMC-based environments. Previously, HCX existed as a bolt-on technology separate from VMC on AWS. However, VMware made the decision during the first part of last year to make it a core part of that solution. How does HCX streamline those migrations, exactly? Let’s go in for a closer look.

Simplifying data center moves with VMware HCX

VMware HCX technology:

  • delivers a single pane of glass management experience for Data Center admins.
    By way of this connection, admins leverage the same vSphere client (for those using versions 5.0 and up) they would use to manage on prem workloads. This ability to seamlessly extend onsite data centers into the cloud eliminates the need for admins to learn an entirely new interface or set of skills in order to stay on top of hybrid environments.
  • mitigates the cost of data center refreshes or expansions.
    Workloads can move back and forth between on prem and the cloud using HCX without modification. Moving workloads offsite can allow organizations to bypass refreshing or expanding their data center footprints and instead embrace a hybrid structure.
  • provides a stepping stone on the way to the cloud.
    As mentioned above, workloads can seamlessly transition between life on prem and life in VMC through HCX. This capability can also prove valuable to organizations looking to move into the cloud with greater confidence. Workloads can be pilot-tested in the cloud to weed out unpleasant surprises that could occur during large scale migrations.
  • keeps productivity levels high across the organization.
    Migrations can leave mission critical applications and data unavailable, preventing end users from performing at a high level until their conclusion. With VMware HCX, migrations can be executed in just a few clicks, with zero downtime. This ensures users won’t miss a beat whether workflows exist in the cloud or on prem.
  • protects organizational workloads even in worst case scenarios.
    We’ve already mentioned how VMC on AWS supports high-performing disaster recovery. However, HCX should be a part of this conversation as well. HCX can replicate data to AWS and remove the need to reconfigure IPs. In a recovery scenario, this lets organizations bounce back to a state of normalcy more quickly.
  • combines efficiency and security for faster, safer migration experiences.
    WAN optimization, data deduplication, and more are all baked right into HCX. Furthermore, this bridge connecting onsite workloads and VMC on AWS sports powerful encryption. Having these features built into HCX lets admins execute more efficient, secure migrations without having to purchase and deploy additional appliances or solutions.

Next steps: Get the full VMC on AWS experience

Want to learn more about how VMware HCX is simplifying modern data center migrations? What about the other ways in which VMC on AWS can benefit your organization? Don’t wait until our next blog! Our Data Center team is ready to answer these questions and more. Start a conversation with them today by visiting: https://www.arrayasolutions.com//contact-us/.

Feel free to leave us a comment on these or any of our blogs through social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Share your thoughts then follow us to stay updated on our industry insights and unique IT learning opportunities.

February 11, 2019 by Arraya Insights

Arraya Insights Radio

Episode 13: Predicting 2019 in Technology: Fact vs. Fiction

Arraya Insights Radio is back for another year! In this episode, Tom Clerici (Practice Director, Cyber Security) and Doug Guth (Practice Director, Infrastructure Solutions & IoT) go head-to-head over the year ahead in technology, debating which industry trends will define 2019.

Host: Thomas York (Senior Director, IT Operations)

Guests: Tom Clerici (Practice Director, Cyber Security) and Doug Guth (Practice Director, Infrastructure Solutions & IoT)

Further Reading:

  • DNS Hijacking Prompts Historic CISA Emergency Directive, by Arraya Insights
  • Video: Looking Back on 2018’s Most Impactful Tech Trends, by Arraya Insights
  • Dell EMC Support Update: Circle These Dates on Your Calendar, by Arraya Insights
  • Cisco Publishes 3 High Impact & Above Vulnerabilities: What to Do, by Arraya Insights
  • 6 Security Lessons Learned from Marriott’s Massive Data Breach, by Arraya Insights

Theme Music: “I Don’t Remember (Yesterday)” by Hygh Risque

January 30, 2019 by Arraya Insights

Remote work is rapidly becoming the norm for many businesses. It’s a trend that doesn’t seem to be going anywhere, either. In fact, a recent article on Forbes predicts it will likely only grow stronger in 2019 and beyond, as younger workers continue to establish their place in the workforce. Furthermore, research conducted by Intermedia found one in four workers would turn down a job if it didn’t include the ability to work remotely. What effect could this demand for flexibility have on traditional office staples, like, say, the conference room? Could 2019 spell the end of the conference room as we know it? We doubt it. The need for traditional conference spaces will likely never vanish. However, for many organizations, the trend toward working remotely may necessitate a change in the way they think about at least some of the space they’ve set aside for collaboration.

Huddle spaces are conference rooms, but on a smaller scale. They can house meetings of half a dozen or fewer in-person participants. Then, any remaining members of the guest list are able to dial in from wherever they happen to be. As the number of remote meeting attendees goes up, organizations should consider creating a few huddle spaces.

Beyond layout concerns, the question remains: what technology is needed to make sure huddle spaces achieve full value?

Evolving employee collaboration alongside their work styles

Late last year, Cisco released the Webex Room Kit Mini, a solution built for the specific purpose of transforming huddle spaces into intelligent, secure conference rooms. Tying together everything needed to keep an increasingly dispersed workforce connected – codec, microphones, camera, and more – the Webex Room Kit Mini could fit perfectly within the huddle space use case and beyond.

Let’s quickly run down the capabilities that set the Webex Room Kit Mini apart.

  • Built for the huddle space environment. The Webex Room Kit Mini camera sports a 120-degree field of vision. That’s perfect for recording a small group of tightly-clustered people.
  • Brings big-time intelligence to small meetings. Its camera automatically detects meeting participants, delivering a consistent framing experience so no one gets cut out of the discussion.
  • Enables sharing without wires. Space is at a premium in many of today’s workplaces, making it hard to avoid the cords and cables that bring most conferencing solutions to life. The Webex Room Kit Mini only needs two – power and HDMI – as it permits both wired and wireless sharing.
  • Supports premium content sharing. Webex Room Kit Mini supports full 4K content sharing, ensuring those in the huddle don’t miss a thing.

Next Steps: Decide if the Webex Room Kit Mini is right for your huddle space

Want to learn more about Cisco’s Webex Room Kit Mini? Our team of collaboration experts is available to dive deeper into the above points and more to help you determine if it’s the right solution for your organization’s needs. Visit https://www.arrayasolutions.com//contact-us/ to start a conversation with them today.

As always, feel free to leave us a comment on this or any of our blogs through social media. Arraya can be found on LinkedIn, Twitter, and Facebook. Remember to follow us to stay up to date on our industry insights and unique IT learning opportunities.

January 28, 2019 by Arraya Insights

An ongoing malicious campaign targeting federal government websites prompted a historic response from the Cybersecurity and Infrastructure Security Agency (CISA). The agency, which operates under the banner of the Department of Homeland Security, issued its first-ever emergency directive last week in an attempt to thwart a series of DNS hijacking attacks. Now, granted, at-risk executive branch agencies are the intended target of this directive. However, the threat vector it documents is something all organizations should be aware of – as are the defensive schemes.

CISA’s instructions come as evidence mounts of a persistent operation to hijack government accounts that manage agency website DNS records. CISA dismissed the techniques behind the campaign as not “especially innovative,” but that didn’t stop the agency from taking further action. DNS security is an all-too-common blind spot for organizations – both inside and outside the federal government. Failure to properly defend this weak point could allow criminals to intercept legitimate traffic, knock services offline, help themselves to sensitive data, and more.

So, what does CISA recommend federal agencies – and really any organization – do to prevent DNS hijacking? The emergency directive included four best practices gleaned from CISA’s own expertise as well as from the experience of other technology and security professionals, from the public and private sectors.

4 CISA-approved DNS defense best practices

Agencies – and, again, really all organizations – should:

  • Verify current DNS records to ensure traffic resolves as intended and not to an unknown third party (a minimal verification sketch follows this list)
  • Update the passwords for any DNS management account to cut off the access of any unauthorized outsiders
  • Add multi-factor authentication to any DNS management accounts to provide an additional layer of security for this often-overlooked access point
  • Keep an eye on Certificate Transparency logs for suspicious activity, including phantom certificates
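
As flagged in the first item above, here is a minimal sketch of what that verification step could look like, using Python’s standard resolver to compare current answers against a known-good list. The hostnames and addresses are placeholders; a real audit would use your own zones and, ideally, query authoritative servers directly.

```python
import socket

# Hypothetical "known good" records to compare against; a real check would use
# the values on file for your own zones.
EXPECTED_A_RECORDS = {
    "portal.example.gov": {"198.51.100.20"},
    "mail.example.gov": {"198.51.100.25"},
}


def current_addresses(hostname: str) -> set:
    """Resolve the name as public DNS currently answers it."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}


def audit_dns_records() -> None:
    for hostname, expected in EXPECTED_A_RECORDS.items():
        try:
            observed = current_addresses(hostname)
        except socket.gaierror:
            print(f"[WARN] {hostname}: could not resolve")
            continue
        if observed != expected:
            print(f"[ALERT] {hostname}: resolves to {observed}, expected {expected}")
        else:
            print(f"[OK] {hostname}")


if __name__ == "__main__":
    audit_dns_records()
```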

Defend your environment without further taxing your team

Despite its importance, there is a reason DNS security falls by the wayside for many organizations and even government agencies. Today’s technology teams are overwhelmed as it is, and adding more manual tasks, such as regularly parsing DNS records and Certificate Transparency logs, will only worsen the matter. Furthermore, these routine tasks are often the first ones set aside in favor of higher-value projects or more pressing fires.

One tool Arraya recommends for ensuring DNS security without adding more work to IT’s plate is Cisco Umbrella. Organizations are able to forward their DNS logs to Umbrella for analysis. If Umbrella identifies a change that would route DNS requests to high risk domains, it can block the move. Utilizing a solution such as Umbrella, backed by CISA’s best practices listed above, is an excellent way to transform DNS security from a weak point to a strength.

Want to learn more about Cisco Umbrella, DNS security and building a secure technology environment? Reach out to our team of cyber security experts now by visiting: https://www.arrayasolutions.com//contact-us/.

Also, let us know what you think of this post! Leave us any comments or questions through our social media presence. Arraya can be found on LinkedIn, Twitter, and Facebook. Then, follow us to keep up with our take on industry news and gain access to exclusive learning opportunities.

January 9, 2019 by Arraya Insights

Organizations from across the business spectrum are flocking to robotic process automation (RPA). And they’re doing so with good reason. In a blog post from earlier this year, our subject matter experts detailed how RPA can help keep ballooning technology costs in check, while also offering anywhere from a 60-70% return on top of initial investments. As far as value propositions go, those are pretty eye-catching. The thing is, there is a downside to all of that RPA love – a 30-50% failure rate on initial projects.

Risks aside, businesses continue to look to RPA as a way to solve problems and increase efficiency. However, here at Arraya, we don’t want to just connect organizations and technology and call it a day. Instead, we want to make sure our partners extract immediate and lasting value from the solutions we recommend and implement.

In that spirit, we sat down with our internal experts to put together a list of the most common RPA mistakes they see businesses make – and what to do to get it right.

5 too-common RPA mistakes

Mistake #1: Ignoring the process itself. RPA can do a lot of things. One thing it can’t do? Fix a broken process. In fact, applying RPA to a bad process can actually make things worse, as all RPA will do is speed things up, resulting in more of whatever made the process faulty to begin with. RPA projects must begin with an in-depth analysis of the process in question. Attention must be given to how it currently works, how it was intended to work, and what needs to be done to bridge the gap between the two. Only after the value of the process has been verified should the topic of automation come up.

Mistake #2: Biting off too much. RPA projects are complex. Organizations that try to automate too much too soon often see their projects spiral way out of control – or end up abandoned. This is a situation where having someone who has been there before – be it an in-house resource or a partner – can make all the difference. Right away, this resource can provide value by helping to set achievable benchmarks and milestones. Also, he or she will know where the risks lie and how to avoid them.

Mistake #3: Overlooking centralized orchestration. Automation needs to be ready to grow and evolve alongside the organization it supports. However, for too many who adopt it, automation ends up becoming restrictive and rigid. Automation best practices call for a central hub from which admins are able to oversee the various bots in their environment. Without this hub, growth or direction changes may require each bot to be reprogrammed or redeployed individually.
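
To make the “central hub” idea concrete, here is a toy sketch in which every bot reads shared settings from one orchestrator, so a change made once reaches all of them without per-bot redeploys. The class names, URLs, and configuration keys are illustrative, not any particular RPA product’s API.

```python
# Toy illustration of central orchestration: bots pull shared settings from one
# registry, so a change made once reaches every bot without redeploying each.


class Orchestrator:
    def __init__(self):
        self.shared_config = {"erp_url": "https://erp.example.com", "retry_limit": 3}
        self.bots = []

    def register(self, bot: "Bot") -> None:
        self.bots.append(bot)

    def update_config(self, key: str, value) -> None:
        """One change here propagates to every registered bot."""
        self.shared_config[key] = value


class Bot:
    def __init__(self, name: str, hub: Orchestrator):
        self.name = name
        self.hub = hub
        hub.register(self)

    def run(self) -> None:
        cfg = self.hub.shared_config  # always read current, centrally managed settings
        print(f"{self.name}: posting work to {cfg['erp_url']} "
              f"(retry limit {cfg['retry_limit']})")


hub = Orchestrator()
bots = [Bot("invoice-bot", hub), Bot("onboarding-bot", hub)]
hub.update_config("erp_url", "https://erp-new.example.com")  # migration handled once
for bot in bots:
    bot.run()
```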

Mistake #4: Putting up implementation silos. More and more, technology is becoming less of an “IT concern” and more of an “everybody concern.” While this is absolutely true of RPA, many organizations miss that fact, instead plowing ahead with the false idea that automation is one for the techies. Any move toward automation must be an interdisciplinary project. Representatives from any impacted department should be brought into early discussions in order to share their unique insight into the inner workings of a targeted process. This will help reduce unpleasant surprises later on.

Mistake #5: Assuming RPA equals hands-free. As mentioned earlier, RPA can require some hands-on attention, either as the business or its objectives evolve. Routine maintenance is, of course, also a concern. Some businesses have a habit of leaving RPA to its own devices post-rollout. In the event that something goes wrong, this can lead to delayed responses. Even after someone has identified the problem, correcting it can fall between the cracks of an org chart. Instead, make it clear early on just where responsibility for RPA lies and ensure maintaining it becomes part of the routine.

Next steps: How to succeed with RPA

Want to learn more about how to succeed with RPA? Our team of experts is here to help. They can provide the strategic knowledge and hands-on expertise necessary to ensure RPA projects pay off quickly and for years to come. Reach out to our team now by visiting: https://www.arrayasolutions.com//contact-us/.

Let us know what you think of this post! Leave us any comments or questions on our social media pages. We can be found on LinkedIn, Twitter, and Facebook. Then, follow us so you can keep up with our take on industry news and gain access to exclusive learning opportunities.

January 2, 2019 by Arraya Insights

This is the final post in our ongoing, deep dive series on the subject of segmentation. Each post has been written by a member of Arraya’s technical or tactical teams, focusing on a specific piece of this extremely broad, highly transformational topic.

Does your network need “more” segmentation? The answer is most likely “yes.” Even if you have access to most other corporate assets, executive compensation plans are usually not available for just anyone to see. But what protection are you providing for the rest of your company’s data? Camera and video systems, physical security, and building access systems all house employee personal information. These systems can and do become compromised, and they are some of the last devices to be moved to the cloud and its promise of protection. With some basic filtering and segmentation, a considerable amount of risk can be mitigated. We can take this process and replicate it over a Cisco-backed wide area network. While we often have strong policies and procedures at corporate headquarters, remote locations often don’t have the same budget or mindset. These remote locations often generate a significant – and overlooked – risk.

Software-defined or “SD” WAN isn’t the first technology to bring us the ability to filter and segment corporate sites. Service providers have used segmentation and network filtering for as long as they have been around. This is no simple feat: there is an entire CCIE discipline dedicated to the complexity of popping labels, VRF leaking, L3VPN, and carrier-based Ethernet configuration. By choosing the right SD-WAN provider, you can get some of these features without the need for your own team of CCIEs.

Architects build today’s networks using templates and address pools instead of console cables and notepads. This allows us to keep our deployments, security, and design consistent. In case of a lost or compromised device, we can quickly revoke its certificate(s) and remove the device from the network.

There are essentially three key segmentation building blocks.

Building Block #1: Classification

This is the first stage of segmentation. On the WAN edge, admins traditionally did this with layer 4 access lists matching on an IP or port. This evolved into NBAR, Cisco’s technology that identifies traffic dynamically instead of relying on static lists of ports. The current Cisco NBAR2 technology can recognize over a thousand applications. Protocol packs apply incremental “hitless” updates, identifying today’s plain-text and encrypted applications with no need for decryption.

Recently, new NBAR “groups” and “attributes” have made network admins’ lives easier. A high-level list of “traffic classes,” such as VOIP-telephony, real-time-interactive, network-control, and bulk data, is created and updated by default. The network administrator can additionally apply an attribute called “business-relevance.” This helps mark down or reclassify applications like Apple FaceTime, which identifies itself as real-time traffic but is most likely not business-relevant at your job.

Using these classification abilities, we can match traffic for guests, contractors, and employees, and then “tag” the traffic for appropriate filtering. Depending on the environment, this tag may be a Cisco SGT, a VRF, or a DSCP value. It will come up again further down the road when enforcing filtering.
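
In rough terms, the classification stage boils down to a pair of lookups: identified application to traffic class, and class plus user role to the tag that filtering will later act on. The Python sketch below illustrates that mapping; the application names and role tags are illustrative assumptions, while the DSCP numbers are the standard code points.

```python
# Simplified classification step: once traffic is identified (the job NBAR2
# does on the router), map it to a class and a tag that later filtering acts on.
# Application names and role tags are illustrative; DSCP values are standard.

APP_TO_CLASS = {
    "cisco-jabber-audio": "voip-telephony",
    "facetime": "real-time-interactive",   # can be marked down as not business-relevant
    "ospf": "network-control",
    "windows-update": "bulk-data",
}

CLASS_TO_DSCP = {
    "voip-telephony": 46,          # EF
    "real-time-interactive": 32,   # CS4
    "network-control": 48,         # CS6
    "bulk-data": 10,               # AF11
}

ROLE_TO_TAG = {"employee": 10, "contractor": 20, "guest": 30}  # SGT-style values


def classify(application: str, role: str, business_relevant: bool = True) -> dict:
    traffic_class = APP_TO_CLASS.get(application, "default")
    dscp = CLASS_TO_DSCP.get(traffic_class, 0) if business_relevant else 0
    return {"class": traffic_class, "dscp": dscp, "tag": ROLE_TO_TAG.get(role, 30)}


print(classify("cisco-jabber-audio", "employee"))
print(classify("facetime", "employee", business_relevant=False))  # marked down
```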

Building Block #2: Filtering

The next step in the process is to determine what we want to filter and segment. Easy use cases are guests and unmanaged systems: filter or segment anything your organization can’t manage on the network. This isn’t always easy or even possible. By filtering traffic from unprotected locations, we can reduce risk and take more of a “whitelist” approach, explicitly permitting only the traffic that is required.

Just about every SD-WAN solution gives you the ability to segment and separate traffic out of the box. “Leaking” and filtering traffic is possible with most SD-WAN solutions. However, many organizations prefer to filter this traffic through traditional firewalls. This keeps the filtering of security zones consistent across an organization, especially for those with existing security standards and approved methods or procedures.
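
A whitelist policy for an unmanaged segment reduces to “permit the handful of flows the segment genuinely needs, drop the rest.” The sketch below illustrates that default-deny logic in Python; the subnets and ports are hypothetical examples, not a recommended rule set.

```python
# Toy whitelist filter for a segment of unmanaged devices: explicitly permit
# only the flows that are required and drop everything else. Subnets and ports
# are hypothetical examples.

from ipaddress import ip_address, ip_network

CAMERA_SEGMENT = ip_network("10.50.0.0/24")

# (destination network, protocol, port) tuples the segment genuinely needs
PERMITTED_FLOWS = [
    (ip_network("10.10.5.10/32"), "tcp", 443),  # video management server
    (ip_network("10.10.5.53/32"), "udp", 53),   # internal DNS
]


def allow(src: str, dst: str, proto: str, port: int) -> bool:
    """Default deny: traffic from the segment passes only if a rule matches."""
    if ip_address(src) not in CAMERA_SEGMENT:
        return True  # this policy only constrains the camera segment
    return any(ip_address(dst) in net and proto == p and port == prt
               for net, p, prt in PERMITTED_FLOWS)


print(allow("10.50.0.21", "10.10.5.10", "tcp", 443))  # True: explicitly permitted
print(allow("10.50.0.21", "8.8.8.8", "tcp", 443))     # False: not on the whitelist
```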

Building Block #3: Validation / Reporting

The final piece of any segmentation project is validation and reporting. IT should document, validate, and audit all high-level policies. Adding or editing security zones necessitates additional testing and validation to ensure conformance.

To learn more about segmentation and its role in today’s IT landscape, reach out to our team of experts by visiting: https://www.arrayasolutions.com//contact-us/.  
