SQL Connection to Cloud for Historian to appease IT

mad4x4

Member
Join Date
Mar 2009
Location
St Cyrus
Posts
363
I have a PLC, HMI & SQL Server sitting on an isolated network (on premises) and I would like to securely connect this to the Azure cloud without exposing the whole control system to the internet.


I have tried this with my demo system and got it working, but my SQL server had to be plugged into the internet router. This uses transactional SQL replication from the MSSQL server to a cloud SQL server (works well). BUT! Plugging directly into the internet on the production system is a no-no from company IT (as machine OS, patching, antivirus, etc. don't meet company spec, and the machine is owned by the project, not IT, yada yada yada).

Due to the remote location the only internet comes from IT; we can NOT "3G route it" or use other hacky solutions.

What tools/network architecture do I need to put in (gateways, firewalls, tunnels, VPNs, second NICs, etc.) to convince IT I have a valid case and to get IT to take this seriously?

Alternatively I can jump up and down and make a lot of noise; with IT we will lose control of everything, they will F### it up as usual, and three years down the line we will have to start again.

My initial thought is MSSQL replicating to a local gateway MSSQL machine, and that machine then replicating to the cloud via the internet. This protects the production SQL server, but it seems complex and needs loads of licenses and hardware.
 
MSSQL is Microsoft SQL server?


MSQL is mSQL by Hughes (https://en.wikipedia.org/wiki/MSQL), or do you mean MySQL?


I am assuming that in your demo system, your MSSQL server was the Publisher and connected to the internet, and the Cloud MSSQL server was the subscriber, and the Cloud MSSQL initiated the Transactional Replication, i.e. the Cloud MSSQL was pulling the data. Is that correct?






I would think the issue is that the data need to be pushed to the cloud SQL, not pulled. That way there can be a gateway/NAT router with the isolated network as the NAT router's LAN, and the internet (or maybe another company NAT router's LAN) as the NAT router's WAN. The isolated network can then initiate connections to the internet and the NAT handles the details, but the internet cannot initiate any connections to the isolated network.


The Transactional Replication web page at MS mentions a Distribution Agent that can do push subscriptions, as opposed to the Subscriber for pull subscriptions. Maybe your demo used the latter?
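To make the outbound-only idea concrete, here is a minimal Python sketch of the pattern (not the actual SQL Server agents): the isolated side always opens the connection and pushes data out, and nothing on the internet side ever connects inward. The local HTTP stub standing in for the cloud endpoint, and the `push_row` helper, are assumptions for illustration only.

```python
# Sketch of the outbound-only ("push") pattern: the isolated network
# always initiates the connection; the internet side never connects
# inward. A stub HTTP server stands in for the cloud endpoint.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # rows the "cloud" side has accepted

class CloudStub(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CloudStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

def push_row(row):
    """On-prem side: open an outbound connection and push one row."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/ingest",
        data=json.dumps(row).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

status = push_row({"tag": "PT-101", "value": 4.2})
server.shutdown()
```

With a NAT/firewall in between, only the one outbound flow from `push_row` would need to be permitted, which is the case to make to IT.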
 
MSQL was a typo on my part; it is Microsoft SQL Server 2012 (SP4), i.e. MSSQL.


To answer: I had the demo set up with the on-premises server as the Publisher and the Microsoft Azure server as the Subscriber. Transactional replication was configured at the on-premises end, not in the cloud; all you do is map the cloud server as a subscriber. I will have a look at the link you provided and see if I can set this up the other way.


Does the Publisher push information out, or does a Subscriber pull an update? The SQL Agent runs on the SQL Server, which is the Publisher, and this triggers the sync, so I would think that the Publisher pushes?...
 
... So I would think that the publisher pushes?...

Hmmm, sounds like putting the network of the local MSSQL server behind a NAT should do the trick, but it depends on your IT department's skill and comfort level. It may be possible to set that NAT router up to allow only that one connection as well, to make it even more secure. If your IT folks don't understand that, tell them to propose and fund a solution.
 
Working in IT but coming from a Process background -

Depending on your data flow, I would put the SQL server on the enterprise (IT) side of things, patch it etc., use local buffering in case of a network drop (most SCADAs handle this fine), and put a firewall/1:1 NAT/something between the enterprise side and the local control network.
Only allow communications from the SCADA to the SQL server.
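The local-buffering point above can be sketched as a simple store-and-forward queue: samples accumulate in a local SQLite file while the enterprise link is down, then drain once it returns. This is a minimal Python illustration, not any particular SCADA's buffering; `send_to_sql` is a hypothetical stand-in for the real insert into the enterprise-side server.

```python
# Sketch of local buffering ("store and forward"): samples queue in a
# local SQLite outbox while the enterprise SQL link is down, then are
# flushed oldest-first once it returns. send_to_sql() is a hypothetical
# stand-in for the real insert into the enterprise-side server.
import sqlite3

buf = sqlite3.connect(":memory:")
buf.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, tag TEXT, value REAL)")

link_up = False          # pretend the network has dropped
delivered = []           # what the enterprise SQL server has received

def send_to_sql(tag, value):
    if not link_up:
        raise ConnectionError("enterprise link down")
    delivered.append((tag, value))

def record(tag, value):
    """Try to send; on failure, queue locally instead of losing the sample."""
    try:
        send_to_sql(tag, value)
    except ConnectionError:
        buf.execute("INSERT INTO outbox (tag, value) VALUES (?, ?)", (tag, value))
        buf.commit()

def flush():
    """Drain the outbox once the link is back, oldest first."""
    rows = buf.execute("SELECT id, tag, value FROM outbox ORDER BY id").fetchall()
    for rowid, tag, value in rows:
        send_to_sql(tag, value)
        buf.execute("DELETE FROM outbox WHERE id = ?", (rowid,))
    buf.commit()

record("FT-200", 12.5)   # link down: buffered locally
record("FT-200", 12.7)   # link down: buffered locally
link_up = True           # link restored
flush()                  # both samples delivered in order, outbox emptied
```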
 
I have to ask:
why would you want to put the plant data onto the cloud?
If the account were hacked, data could be corrupted, erased, all sorts of things.
Call me old-fashioned and paranoid, but whoever owns the cloud you are on can simply flip a switch and turn it off, or start charging for the service. What happens if communications are interrupted for several hours?
james
 
You are correct, to do this right requires a DMZ and that duplicates hardware and expense... and if you are using MSSQL, it is licensed by the core so ... lots of expense.
There are many reference documents to set this up.

MySQL is fairly compatible, has good community support and is free, so there is that option; I don't know anything about mSQL.

When you reach outside your intranet you are in the wild wild west so your best bet is to leave that to the folks who have to deal with it all the time. I get at least 6 notifications every day from security orgs like CERT, DHS, MS-ISAC with new vulnerabilities and there is no way I can keep up and still get anything else done.

And I agree the corporate IT dept is almost always a colossal pain to deal with and slower than molasses to change, but express your concerns to management and leave it to them to deal with.
 
I have to ask:
why would you want to put the plant data onto the cloud?
If the account were hacked, data could be corrupted, erased, all sorts of things.
Call me old-fashioned and paranoid, but whoever owns the cloud you are on can simply flip a switch and turn it off, or start charging for the service. What happens if communications are interrupted for several hours?
james

The answer is simple: the process historian is offshore on a ship that has limited access and a slow connection via satellite, and the analytics will be done onshore. So replication is up to the cloud, or to onshore, for analysis. IT would provide the onshore server (most likely an Azure server) as that is the way they roll.
 
You are correct, to do this right requires a DMZ and that duplicates hardware and expense... ....

No, it does not.

A DMZ is basically just a "separating space" between a dirty network, like the office network, and the production network: you can connect into it from the dirty-network side and from the production side, but those two cannot connect through it. The SQL server should sit there, not in the production network anyway.
 
Let's keep it consistent, it's MSSQL not MSQL. Matter of fact, MSQL is mini SQL from Hughes Technologies.

Likewise, AdvancedHMI should be AAHMI, not AHMI:=)
 
Let's keep it clear: in the first post the OP referred to it as MSSQL, obviously Microsoft SQL Server, but then a few times referred to MSQL; there was a query about it and the OP cleared it up: "MSQL" was a typo where MSSQL was meant.


So at this point, only MSSQL has been considered by the OP; the expense made an on-board instance unattractive, and MySQL has been suggested as an alternative. If the DB is local, I would go with SQLite: freeware; simpler to set up than MSSQL or MySQL; simpler to back up (one file); a reasonable SQL dialect. The disadvantage would be the transfer of data to shore: MSSQL and MySQL have replication tools that send only incremental data, while copying the entire SQLite DB file at every change might not be practical over a slow link. So custom replication software would need to be written, although it would not be difficult: there could be a "replicated" column in every table, used to filter which data are already on-shore and which incremental data should still be sent to shore.
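The "replicated" column idea above can be sketched in a few lines of Python with the standard-library sqlite3 module: each row carries a flag, and a sync pass sends only unflagged rows. The table layout and the `ship_to_shore` helper are assumptions for illustration, not a real satellite-transfer API.

```python
# Sketch of the "replicated" column idea: each table carries a flag so
# only rows not yet on shore are sent over the slow link. ship_to_shore()
# is a hypothetical stand-in for the actual transfer mechanism.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE history (
    id INTEGER PRIMARY KEY, tag TEXT, value REAL,
    replicated INTEGER NOT NULL DEFAULT 0)""")
db.executemany("INSERT INTO history (tag, value) VALUES (?, ?)",
               [("PT-101", 4.2), ("PT-102", 7.9)])
db.commit()

shore = []  # rows the on-shore database has received

def ship_to_shore(rows):
    shore.extend(rows)  # stand-in for the real slow-link transfer

def replicate():
    """Send only unreplicated rows, then mark them as sent."""
    rows = db.execute(
        "SELECT id, tag, value FROM history WHERE replicated = 0").fetchall()
    if rows:
        ship_to_shore(rows)
        db.executemany("UPDATE history SET replicated = 1 WHERE id = ?",
                       [(r[0],) for r in rows])
        db.commit()
    return len(rows)

first = replicate()   # first pass sends the two pending rows
second = replicate()  # nothing new, so nothing is sent
```

A repeated pass sends nothing until new rows arrive, which is exactly the incremental behaviour wanted over a slow satellite link.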



Then again, if the data are being staged on-board for eventual transfer to land-based HQ, then any format would work e.g. Python pickle files.



The attraction of MSSQL is only having to configure existing software, so it comes down to weighing the cost of MSSQL licensing vs. the time invested to set up something else.
 
