TheSaffaGeek

My ramblings about all things technical



Dell IQT

*Disclaimer* Whilst I am a Dell EMC employee, this is my own take on the announcement and technologies. This posting has not been reviewed nor requested by my employer, and whilst I plan to give as factual information as possible, nothing mentioned below should be deemed fully verified.

 

On Wednesday this week Dell Technologies announced their new IoT strategy, division and solutions to accelerate customer adoption. There were a number of announcements from the event, and for me this is super exciting: I have always had a very keen interest in the whole AI and IoT ecosystem and what it enables, and being part of a company that is investing and looking to do so much within the arena truly excites me. So I wanted to give my own take on the announcements, and as more details come out I might do a series of postings around the various solutions. Also, if I am lucky enough to play with some of the new technologies being announced, then, NDA permitting, I might even try to do some pieces in a series about those as well.

Being someone who works with customers every day to help them digitally transform and move towards cloud computing and a centralised model of delivering IT services, I have seen amazing success stories and benefits from this approach. But now, in the age of not just smart phones but smart cars, smart watches, heart monitors and a plethora of smart sensors, there is a requirement for a distributed core that allows focused real-time processing of information, as these devices require instant responses rather than having to wait for a reply from a centralised cloud infrastructure (if companies are even at this fairly advanced stage) that may be seconds away.

Michael Dell, chairman and chief executive officer of Dell Technologies, linked to this change when he stated: “IoT is fundamentally changing how we live, how organisations operate and how the world works. Dell Technologies is leading the way for our customers with a new distributed computing architecture that brings IoT and artificial intelligence together in one, interdependent ecosystem from the edge to the core to the cloud. The implications for our global society will be nothing short of profound.” Due to this requirement from customers, Dell Technologies announced the new Dell Technologies IoT Division, which will be led by someone very well-known in the virtualisation arena, VMware Executive Vice President and CTO Ray O’Farrell, and which is chartered with orchestrating the development of IoT products and services across the Dell Technologies family. Ray set out that Dell Technologies is investing $1B in new IoT products, solutions, labs, partner programs and ecosystem.

You may or may not be aware, but Dell already provides Edge Gateways which are made to function at the extremes: they offer an operating temperature range of -30°C to 70°C and a hardened enclosure that protects against dust, oil and the elements, while a chassis intrusion switch sends an alert in case of unauthorised access, to name a couple of the physical features. These gateways can be managed by VMware Pulse IoT Control Center. Let’s have a deeper look at VMware Pulse IoT Control Center:

 

VMware Pulse IoT Control Center

VMware define the offering as: “a secure, enterprise grade, end to end IoT infrastructure management solution which allows OT and IT to have complete control over their IoT use cases, from the edge all the way to the cloud. It helps companies to onboard, manage, monitor and secure all “things” and infrastructure for IoT.” Additional information around VMware Pulse seems to be very limited at the moment, even for VMware partners, but from what is covered in the datasheet it uses the Liota open-source SDK to allow enterprises to create edge-system data orchestration applications, offers the ability to integrate with NSX for network segment mapping (I wonder if this is using NSX-T or NSX Cloud), and provides console integration with VMware Identity Manager for SSO.

Project Nautilus

The next announcement was Dell EMC’s unified data stack for IoT, called Project Nautilus, which provides an extensible collection of elastic storage micro-services.

 


Manuvir Das, SVP and GM of the Unstructured Storage Division at Dell EMC, gave a brilliant demo of the offering at the IQT announcement, showing how Project Nautilus ingests millions of streams of data and analyses them in real time.

Boomi

Another offering covered during the announcement, and one that has been out for a while, is Dell Boomi, Dell Technologies’ enterprise integration platform-as-a-service offering. I have only just started learning about Boomi, so I plan to write a separate posting about the offering and link to it here, but there is already a brilliant demo of the offering available here.

Other announcements

At the event, Dell Technologies also revealed new IoT services initiatives which include:

  • IoT Vision Workshop – Identifies and prioritises high-value business use cases for IoT data – essentially how and where to deploy IoT analytics that drive business.
  • IoT Technology Advisory – Develops the overall IoT architecture and road-map for implementation.

In addition, with the core focus on technology and services, Dell Technologies’ strategy is to grow the IoT footprint via a strong partner program and ecosystem. Dell’s award-winning IoT Solutions Partner Program is a carefully curated, multi-tiered program consisting of 90+ partners, from enterprises like Microsoft, Intel and SAP to start-ups like Evolve, FogHorn and Zingbox, who also presented at the event.

Dell Technologies is also investing in the future of IoT through Dell Technologies Capital. The venture arm of Dell Technologies is partnering closely with the IoT division, providing industry insight and relationships to support its strategic agenda. Through its investments in promising startups and founders, Dell Technologies Capital provides a valuable link to the external innovation ecosystem, effectively accelerating the development and deployment of new IoT, AI and ML technologies and solutions.

As I stated at the beginning, I am hoping to write a series of postings around these offerings as I learn more about them. I’m personally really excited about the offerings and the investments being made, and hopefully I can learn them and share that knowledge on this blog.

Gregg




PowerEdge 6950 shows 112GB of RAM even though 128GB is installed

 

For the past few days we’ve had the above problem: we upgraded one of our PowerEdge 6950 servers to 128GB of RAM, but once we booted it, it only showed 112GB. My colleague James Voll and I tested to try to work out which RAM bank might be causing the issue, and found one where, even with its two RAM sticks removed, the server still showed 112GB. So we contacted Dell and got an engineer out who replaced the motherboard for us, but after the replacement it still wasn’t showing the full amount. In actual fact, because the new motherboard shipped with the basic BIOS, it only showed 12GB! So we updated the BIOS to the latest revision, and it was still showing 112GB. I then logged into the BIOS, went to the memory settings for the server, and noticed “Server Memory Test” was enabled, so I decided to disable it and reboot the server to see if it would change anything. Remarkably, it did: the server now showed 128GB of RAM and booted around twenty times faster through the POST screen. I’m not sure if it’s a bug, or whether it’s because 6950s don’t normally have that amount of RAM in them, but disabling the Server Memory Test feature seems to have given back our missing 16GB of RAM.

Hope this saves someone the time and effort it took us to work it out.

Gregg



“Warning: The current memory configuration is not optimal.”

 

Today I had a problem where, after adding new memory to one of our Dell 6950 servers, the server showed 82GB instead of the installed 96GB after booting, even though we had followed the correct steps of installing the new 8GB memory cards in pairs.


We already had 4GB cards installed, and when I checked under Hardware Status via VirtualCenter (this machine being an ESX host), it showed all the DIMMs correctly and they added up to the right total. I searched all over the web for an answer, but only when I rebooted the server did it give me an error that helped me source the answer to my problem. It showed “Warning: The current memory configuration is not optimal.” and told me to see the owner’s manual.


I searched for this and came across Steve Jenkins’ posting on the exact error, along with the response he got from Dan Coulter, an Enterprise Expert Center Server Support Analyst at Dell. Rather than reposting the answer here, and so as to direct traffic to Steve’s posting, here is the link:

http://stevejenkins.com/blog/2009/12/dell-warning-the-current-memory-configuration-is-not-optimal/

Seeing as we have more memory arriving soon to match the newly added 8GB cards, I’ll just wait for the new cards to fix the problem. A big thanks to Steve and Dan for the answer.

Gregg



Dell OpenManage Installation

 

A few weeks ago I had to install OpenManage on some of my newer Dell servers, and since some of the settings have changed since I last used it, I thought I would write up a list of the required steps for our team’s wiki site, as well as a blog posting of the steps for anyone who hasn’t done it before. So, belatedly, here are the steps and the cool new tricks some of my friends on Twitter showed me.

Firstly the standard steps:

  1. Download the tar file from the FTP site or the support site
  2. Copy the file to /tmp/openmanage on the server using WinSCP (this is what I use, at least)
  3. Log into the box, either via WinSCP, via PuTTY or on the console directly, and type in:

    cd /tmp/openmanage
    tar -zxvf OM_X.X.X_ManNode_AXX.tar.gz (currently OM_6.2.0_ManNode_A00.tar.gz)

  4. Once the files have unpacked, type in:
    cd linux/supportscripts/
    ./srvadmin-install.sh -x
      (-x is for an express install and installs everything; if you only want to install specific features, the switches you can use instead are -d, -w, -r and -s)

    -d = Dell agent
    -w = web interface
    -r = DRAC services
    -s = storage management

  5. Once the files have unpacked and installed, type:
    srvadmin-services.sh start
  6. When the various component services have finished starting, remove the install files by typing (note the lowercase directory name, matching step 2):
    cd /tmp
    rm -rf openmanage
  7. To allow access to the OpenManage web interface, the following firewall command needs to be run on the ESX host; it opens the required port (all the steps are pulled together into a script sketch below):
    esxcfg-firewall -o 1311,tcp,in,OpenManageRequest
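
For anyone doing this on more than one host, the manual steps above can be pulled together into a single script. The below is just a minimal sketch of those same steps rather than anything official: the tarball name, the /tmp/openmanage path and the express-install switch all come from the steps above, so adjust them for your environment and OpenManage version.

    #!/bin/sh
    # Minimal sketch of the manual OpenManage install steps above.
    # Assumes the tarball has already been copied to /tmp/openmanage
    # and that TARBALL matches the release you actually downloaded.
    TARBALL=OM_6.2.0_ManNode_A00.tar.gz

    cd /tmp/openmanage || exit 1
    tar -zxvf "$TARBALL"

    # Express install (-x installs everything); swap in -d/-w/-r/-s
    # if you only want specific components.
    cd linux/supportscripts
    ./srvadmin-install.sh -x

    # Start the OpenManage services
    srvadmin-services.sh start

    # Open the firewall port the web interface uses (ESX classic)
    esxcfg-firewall -o 1311,tcp,in,OpenManageRequest

    # Clean up the install files
    cd /tmp
    rm -rf openmanage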

 

While asking a few friends on Twitter whether the latest OpenManage worked well in their environments, Arne Fokkema (@afokkema) of ict-freak.nl fame pointed me to the automated, scripted way of doing it written up by Scott Hanson (@dellservergeek). As you may know if you’ve read some of my previous blog postings, I’m trying to learn how to script more and more of my daily tasks, firstly to build my PowerShell and scripting knowledge and skills, and also to make my daily job easier. The script is really simple and is one I’m planning to test in my lab environment very soon. At the bottom of the script, though, was a comment by one of my PowerShell idols, Alan Renouf (@alanrenouf). He had changed a few of the snmpd commands, so I got hold of him via Twitter and, classic him, he mailed me the script he spoke of. Only after this did I notice he had written up a blog post about it, which is exactly what he sent me.
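
I haven’t reproduced Scott or Alan’s scripts here (their own postings are the proper source), but to give a flavour of the snmpd side of it, that sort of script typically just adds the management station as a trap destination in snmpd.conf and restarts the daemon. The below is a rough sketch only; the IP address and community string are placeholders of mine, not values from their scripts.

    #!/bin/sh
    # Rough sketch: point SNMP traps at a management station.
    MGMT_STATION=192.168.1.100   # placeholder: your management station
    COMMUNITY=public             # placeholder: your SNMP community

    echo "trapsink $MGMT_STATION $COMMUNITY" >> /etc/snmp/snmpd.conf
    service snmpd restart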

Thanks to all who replied to my Twitter messages, and hopefully I can get Alan and Scott’s scripts into my automated server deployments in the very near future.

Gregg Robertson
