Here it is:
The UST (US Treasury) does not have the manpower and other resources to make the changes needed to roll this out internationally and do the RV. There are many different vendors involved.
They put this out for bid. First, Bank of America (BOA), as the lead, took the bait but realized it could not supply the resources needed to do the job. Its limited IT department simply did not have the level of technical expertise required, and BOA did not feel it could do the work in the time frame needed or within the budget offered.
So Wells Fargo (WF) came into the picture, since it had a larger IT shop and worked with an excellent, skilled IT outsourcing group. They showed they could perform, and the contract was awarded to them.
One of the problems they are now aware of is that they did not hire a specialized Quality Assurance (QA) test group; they are now finding that this is needed and should have been done from the start. But they had to go back for more money, since their original contract did not include funding for these resources.
They have since hired some QA specialists and a QA manager. They also took the changes needed for granted, since the specs handed off to them were incomplete. The specialized test group is finding that the developers went straight to Beta testing without doing much unit testing (I think I got that right?).
Are developers the coders? Just curious... anyone? They are now moving in the right direction, but there is more testing to do. They are working late hours and weekends.
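For anyone unfamiliar with the jargon: yes, the developers are the people writing the code, and a "unit test" checks one small piece of that code in isolation, long before anything like Beta or integration testing. Here is a tiny, generic illustration (purely made up, nothing to do with the actual bank systems):

# Purely illustrative example (not the actual bank code): a unit test
# checks one small function by itself.
import unittest

def convert(amount, rate):
    """Convert an amount of one currency to another at a given rate."""
    return round(amount * rate, 2)

class TestConvert(unittest.TestCase):
    def test_simple_conversion(self):
        # 100 units at a rate of 3.5 should come out to 350.00
        self.assertEqual(convert(100, 3.5), 350.00)

if __name__ == "__main__":
    unittest.main()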
So WF has realized this was a larger undertaking than expected, since they could not get the changed rates of 190+ currencies to integrate down to the banks and FOREX all at once with any reliability or consistency.
So they backed off, met with the company that controls the FOREX system (they would not tell me which one, and I really don't care), and re-strategized this mess and how they would proceed going forward.
They also met with some high-level officials in the UST and IMF and decided to integrate this with CIX. You know this story already, so I will not repeat it; go read some of my last posts. (In my opinion this only added yet another layer of complexity, but they said it didn't and actually simplifies the process. What do I know about these things?)
They again received more funding. This has been going on now for the last three weeks, which tells me this should have been done at least a month ago, maybe a couple of months ago. So please don't anyone tell me these technical fixes are just excuses, a scam, or false intel meant to pacify us.
So what is the current status of the system today?
As part of my career I have been in project management, so I can relate to what is happening here. In summary, the project is failing: they are over budget and very late on their schedule. These are their issues. This was a much bigger undertaking than they planned for.
Someone or some group did not do such a great job with the requirement specs, something the contractors relied on too heavily to be correct. All the typical nightmares associated with a large project that was treated as a small undertaking. A lot of invalid assumptions were made, I am sure, right from the very beginning.
I know how these situations can get out of hand if you don't have someone strong to step in, take charge, and manage it properly. From what I heard, they did not.
We keep hearing it's fixed: they move the code to production and it runs. Then it fails again, and they go back in to fix it. Do they have a test environment, I asked? Yes they do, I was told.
I was told it is a very limited test environment, since the data must flow through many different applications in systems residing on many different platforms at many different enterprises. They must hand off data (the train I used in my analogy) and hope the receiving system can receive it and process it correctly. So where is the issue now?
We are seeing they are down to the last piece of their integration testing effort between systems. They rely on these other teams to do their jobs but do not manage them, so this too is just another issue: multiple, separate, independent vendors working to get this done. You get the point?
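To picture what an "integration test" between two independent systems looks like, here is a small made-up sketch (hypothetical names, not any real bank or FOREX interface): one system packages up a rate update, and the test checks whether the receiving system can accept and process it.

# Illustrative sketch only: an integration test checks the hand-off
# between two independent systems. Does the receiver accept and
# correctly process what the sender produced?
import json

def sender_build_rate_message(currency, rate):
    # System A (the train leaving the station) packages a rate update.
    return json.dumps({"currency": currency, "rate": rate})

def receiver_process_rate_message(message):
    # System B (the next stop) must parse and validate what it was handed.
    data = json.loads(message)
    if not isinstance(data.get("rate"), (int, float)) or data["rate"] <= 0:
        raise ValueError("invalid rate received")
    return data

def test_handoff():
    msg = sender_build_rate_message("XYZ", 3.5)  # hypothetical currency code
    result = receiver_process_rate_message(msg)
    assert result["currency"] == "XYZ" and result["rate"] == 3.5

test_handoff()
print("hand-off test passed")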
Yesterday, I believe, they made another attempt at an integration test with NASDAQ, which has no test environment set up for this kind of test. So it took down NASDAQ production, since they had to test live in production. This was not intentional.
It was a huge mistake, and they had to back it out since it did not work. NASDAQ was down for a couple of hours while they worked on it. Imagine that? They are sooo sloppy! Or are they? Not to bash them, because I think this is a very complicated integration going on here. I do wish them good luck! How about you?
So they are still working on the situation. This was not the RV rollout they hoped for... believe me. It was just another headache in the process of trying to roll these new coding changes out to live production. So relax! If it had been successful, we might be at the bank right now and I would not be writing this post... lol... maybe?
I am now hearing that they are again rethinking things and going to begin a new test strategy, compartmentalizing their testing. In other words, they are going to conduct integration testing and watch the data flow downstream to each vendor to see how it affects each piece separately.
They have some test tools for this purpose. They have some help now from other companies that stepped in to give advice and some testing resources. I surely hope they can kill this beast soon.
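To picture what that compartmentalized approach could look like, here is another small made-up sketch (hypothetical stage names, not the real systems): the test runs the data through each downstream hop one at a time, so a failure points at a specific hand-off instead of somewhere in a long chain.

# Illustrative sketch only: compartmentalized testing checks each
# downstream hop separately and reports which one breaks.
def stage_treasury(rates):
    return {c: round(r, 4) for c, r in rates.items()}   # normalize rates

def stage_forex(rates):
    return {c: r for c, r in rates.items() if r > 0}    # drop bad rates

def stage_bank(rates):
    return {c: f"{r:.4f}" for c, r in rates.items()}    # format for display

def test_each_stage_separately(rates):
    stages = [("treasury", stage_treasury), ("forex", stage_forex), ("bank", stage_bank)]
    data = rates
    for name, stage in stages:
        try:
            data = stage(data)
            print(f"stage '{name}' passed, {len(data)} rates flowed through")
        except Exception as err:
            print(f"stage '{name}' FAILED: {err}")
            break

test_each_stage_separately({"AAA": 3.5, "BBB": 0.0})  # hypothetical currencies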
That is all I have. No timeline was given to me, since it is critical that they do this NOW. I was told that as soon as they can run tests all the way through and everything works, they will move on to the next step and get it live.
Peace and Luv to ya, mnt goat