Ola Hallengren not truncating T-Logs after backup. Prefer Secondary.

Scenario: you have an availability group with an active/passive (read-only secondary) setup, and backups are set to prefer the secondary. You install Ola's scripts on nodes A (read/write) and B (read-only). A few days later, your T-logs fill up and things break. What's going on?

To cut to the chase, my problem was that I did not have the T-log backup jobs installed on both A and B; I only had them on A, while the full backup jobs were installed on A and B. That meant each weekend we would still get a full backup, and when I checked from the listener or either node, the full backups looked great. What I didn't immediately think to check was the T-log job on node B (the secondary). Putting the T-log backup jobs on both A and B and scheduling them identically fixed it for me.

When I kicked the process off via the listener, it was running this command:

SELECT d.database_name, 
  sys.fn_hadr_backup_is_preferred_replica (d.database_name) AS IsPreferredBackupReplicaNow
FROM sys.availability_databases_cluster d

It basically returns a yes or no that says, "Hey, I'm the node you want to take the backup on." If the job runs and sees a no, it skips backing up that database on that node. Meanwhile, the other node runs the same command, sees a yes, and takes the backup. (The check is built into Ola's scripts.)

A gotcha I ran into: running the job manually from the listener did not help my T-log truncation situation, because it kept running the backup on the wrong node. I did a lot of fruitless testing since I was always starting the job from the listener. Hopping over to the secondary and running the job there produced T-log backups and truncated the logs effectively.
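
For reference, the log backup jobs that Ola's installer creates end up calling his dbo.DatabaseBackup procedure roughly like the sketch below (the directory, retention, and option values here are placeholders, so adjust them for your environment). Because the same command is scheduled on both A and B, whichever node answers "yes" to the preferred-replica check takes the log backup, and the other node simply skips its databases.

EXECUTE dbo.DatabaseBackup
	@Databases = 'USER_DATABASES',
	@Directory = N'\\SomeBackupShare\SQLBackups',  -- placeholder path
	@BackupType = 'LOG',
	@Verify = 'Y',
	@CheckSum = 'Y',
	@CleanupTime = 24,
	@LogToTable = 'Y'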

Once I discovered the problem and the fix, I initially ran into a log chain problem (where I needed to take a new full backup to base my transaction log backups off of). To determine exactly what the problem was, I ran this command:

SELECT
name,
log_reuse_wait_desc,
recovery_model_desc
FROM sys.databases

Here’s an explanation of the log_reuse_wait_desc column:

Explanation of log_reuse_wait_desc
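
If the column shows LOG_BACKUP, the log is simply waiting on a log backup before the space can be reused; in a broken-chain situation like mine, you also need a fresh full backup to base the new log backups on. A minimal sketch, using a made-up database name and path:

-- Placeholder names and paths for illustration only
BACKUP DATABASE [YourDatabase]
	TO DISK = N'\\SomeBackupShare\YourDatabase_FULL.bak'
	WITH CHECKSUM, COMPRESSION;

BACKUP LOG [YourDatabase]
	TO DISK = N'\\SomeBackupShare\YourDatabase_LOG.trn'
	WITH CHECKSUM, COMPRESSION;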

I would recommend following Ola's suggestions on setup and reading all of the documentation carefully when doing this for the first time; following his process should keep you problem free.

Brent Ozar also has some good tips and information for setting up Ola’s scripts on Availability Groups.

Parameter validation failed in SSRS

Parameter validation failed. It is not possible to provide valid values for all parameters. (rsParameterError)

I saw this error the other day and it took me a minute to recall how I’ve fixed this in the past.

If you are unfamiliar with SSRS and Visual Studio, it can be a little "unfriendly" with some features. Specifically, in this case, taking a parameter that used to be hidden and making it a visible, integral part of the report. In my particular circumstance, the report is not testable in Visual Studio and has to be uploaded to a server in order to validate.

I found a few articles and posts, but none of them helped me in this circumstance:

Microsoft MSDN

Social MSDN Microsoft Forum

Forums ASP

SQL Server Central Forums

The things I tried that I really thought would work:

  • Changing one of my dependent parameters populated from a data set to refresh when it sees fit and not automatically every time.
  • Copying an existing and working parameter that is similar over the old one in the XML and tweaking it slightly.
  • Saving and exiting the report.
  • Deleting the parameter and re-creating the parameter.
  • Scouring the XML for all references to the parameter and making sure it all was kosher.

What did work in the end:

Deleting the report off the report server and re-uploading it. Overwriting the report (uploading it and applying the updates) did not change anything and gave me the same error every time. After I deleted it from the server and re-uploaded it, the error was gone completely.

I have added a form for feedback; it would be very helpful if you would take a few minutes to fill it out. My goal is to adjust the content to best help others resolve their problems.


Featured Stack Exchange answer!

As many people I talk to about the pursuit of knowledge know, I advocate for a minimum of learning three new things per day. So typically I’ll subscribe to SQL newsletters, podcasts, blogs, and any other medium of information I can find to learn as much as possible. Well, I was really surprised when I found out an answer I posted on Stack Exchange was featured in Brent Ozar’s newsletter!

Here is the newsletter (I have a feeling the link may break in the future, though!):
Ozar Newsletter

So here’s a screenshot of the page:

And here’s the answer I had posted!

I just thought it was really interesting and neat to pop up randomly in a newsletter I’ve been reading for 7+ years and I had to share it.

Where did I put that thing in my SSIS project?

You may inherit a large SSIS project or create or use one someday, or perhaps there are just a lot of packages. Either way, there will come a day when you have to find something in your SSIS packages.

Say you need to remove a column from a table, but you're not sure where that column is referenced in your SSIS packages. Perhaps you are having metadata issues and need to find every reference to that column. Or maybe you want to see which packages reference which tables, and so on.

There are numerous ways to slice and dice this; I essentially created a PowerShell grep function that loops through my .dtsx packages and finds the referenced string.

Here it is!

# Variables you should alter to your environment
$DirectoryToTarget = "C:\Git\SSIS\PackageSets1\"
$WordToFind = "TableSourceOne"
$PSGrepLog = "C:\LogOutputs\PSGrepLog.txt"

# You will probably leave this variable alone. Set to search SSIS packages.
$FilterType = "*.dtsx"

# Start with an empty log file (ignore the error if it does not exist yet)
Clear-Content $PSGrepLog -ErrorAction SilentlyContinue

# Walk the directory tree, read each package, and log the full path of any file containing the search term
Get-ChildItem -Path $DirectoryToTarget -Filter $FilterType -Recurse | Where-Object { !$_.PSIsContainer } | ForEach-Object {
    $file = Get-Content $_.FullName
    $containsWord = $file | ForEach-Object { $_ -match $WordToFind }
    If ($containsWord -contains $true)
    {
        Add-Content $PSGrepLog $_.FullName
    }
}

A few notes here: it is not perfect, it is not case sensitive, and it will recurse into child directories. It can take a few minutes to run depending on your system and the size of the directory. I would not recommend running this on a production server; copy the files you need to a convenient location away from production and run it there.

Once you have a list of packages with references, you can open each package in a plain-text editor, or Visual Studio will let you examine the XML directly. From there, hit CTRL+F and type in your keyword; it will take you to each instance/reference in the package, making it substantially quicker to find those rogue references.

Happy developing!

Edit on 5/4/18.

I have found out that Visual Studio actually offers the ability to perform the same search at the project level. I think my code is still helpful in particular situations, but you should be able to use the menu's Find feature to search through all of the other packages for your reference too. I'm definitely keeping my code, as it should work across file types with a few minor tweaks.

Snapshot isolation transaction failed in database (Availability Group and SSIS woes)


The Setup:

I have an SSIS package that runs a relatively basic ETL setup. We take data from TableA and move it to TableB. TableB is a heap; we index it during the ETL and drop the index afterwards. Then we join TableB to TableC. The reason we don't go straight from A to C is the data relationship: it can produce a cascading join that generates BILLIONS of reads once we plug in certain WHERE predicate values.
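
To make that flow concrete, here is a minimal sketch of the staging pattern described above; the table names come from this post, but the columns and predicate are made up:

-- Stage: land the slice of TableA we care about in the heap TableB
DECLARE @StartDate DATETIME2(7) = DATEADD(DAY, -1, SYSDATETIME());

INSERT INTO dbo.TableB (BusinessKey, SomeValue)
SELECT BusinessKey, SomeValue
FROM dbo.TableA
WHERE LoadDate >= @StartDate;

-- Index the heap so the join to TableC behaves
CREATE NONCLUSTERED INDEX IX_TableB_BusinessKey ON dbo.TableB (BusinessKey);

-- Extract: join the staged rows to TableC to build the file output
SELECT b.BusinessKey, b.SomeValue, c.OtherValue
FROM dbo.TableB AS b
INNER JOIN dbo.TableC AS c ON c.BusinessKey = b.BusinessKey;

-- Cleanup: drop the index so the next run starts from a plain heap
DROP INDEX IX_TableB_BusinessKey ON dbo.TableB;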

Once TableB is joined to TableC, we create our file out of that data and take it away to import-land, where we do important import stuff. This is the match that lit the fire.

The error:

Snapshot isolation transaction failed in database ‘%.*ls’ because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.

MSDN on the error.

The job ran three times. On the third time, the job succeeded but no rows were imported. The previous two runs elicited the same error.

The where:

I looked at the following timestamps:

– Job failure and SSIS package failure timestamps
– Availability Group log timestamps
– Staging table (Table B) timestamps
– Destination table (Table C) timestamps
– Historical query data timestamps

SSIS failed only milliseconds after an error appeared in the Availability Group logs. Likewise, I had a mismatch of timestamps between the staging table and the destination table: new data existed in staging but not in the destination table, nor in the destination file.

I found a link however that gave me an idea!

What if the query moving data from TableB and TableC into a file was itself causing the issue, rather than something running in parallel and blocking me? The job's flow should not cause issues like this; there is no parallelism in this package. Yet it behaves as if I were using two connections, one read-only and one read/write, while holding metadata locks on the table.

I looked at the Availability Group logs and voila: lock_redo_blocked, with a SCH-M lock being taken on the metadata of the table in question.

The why:

Let’s go back to the setup, this time I’m going to make comments between the steps to help illustrate what’s going on.

I have a SSIS package and it runs a relatively basic ETL setup. We take data from TableA, move it to TableB.

This is on the primary node of a synchronous availability group, we have R/W privileges here.

TableB is a heap, we index it in our ETL and drop it afterwards.

Same deal, we are on the R/W connection in the AG.

Now we join TableB to TableC.

Rubber… meet road, road… meet rubber.

The primary AG node is having trouble replicating the changes from node A to node B fast enough, and a SCH-M lock is being taken out on the table, which is causing our error. The extract source is always the secondary node, which is in synchronous-commit mode.

Woah, wait! Your package succeeded the third time… but there were no rows? Your staging table must have had no records then.

Not quite; we had rows there, and the table we were joining to had matches as well. But we are querying the staging table on the secondary, and the rows are not present there.

Ok… that doesn’t make sense. You are in synchronous commit mode, node A persists as soon as node B does.

True, but as per Microsoft there is a known problem with this replication, and it is not error proof.

MSDN on why Availability Groups don’t love you anymore.

The client application completes an update on the primary replica successfully, but querying the secondary replica shows that the change is not reflected. This case assumes that your availability group has a healthy synchronization state. In most cases, this behavior resolves itself after a few minutes.

If changes are still not reflected on the secondary replica after a few minutes, there may be a bottleneck in the synchronization work flow. The location of the bottleneck depends on whether the secondary replica is set to synchronous commit or asynchronous commit.

Those are the top paragraphs on that page. OK… so that means it is possible we read no rows from the secondary even though they existed on the primary, before the AG was able to replicate them or lock the schema. Why?

Each successful update on the primary replica has already been synchronized to the secondary replica, or that the log records have already been flushed for hardening on the secondary replica. Therefore, the bottleneck should be in the redo process that happens after the log is flushed on the secondary replica.

Bottlenecks. Alright, what causes bottlenecks?

Well, MSDN suggests long-running active transactions, high network latency or low throughput, another workload blocking the redo thread, or the redo thread falling behind because of resource contention.
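
If you want to see whether the redo thread is the piece falling behind, the AG DMVs expose the redo and log send queues directly. A quick check along these lines (column list trimmed to what matters here) shows each database's synchronization state and how much redo work is queued on each replica:

SELECT ag.name AS ag_name,
	ar.replica_server_name,
	DB_NAME(drs.database_id) AS database_name,
	drs.synchronization_state_desc,
	drs.log_send_queue_size,
	drs.redo_queue_size,
	drs.redo_rate
FROM sys.dm_hadr_database_replica_states AS drs
INNER JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
INNER JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;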

The solution:

Ultimately, my solution was to change the data source from the read-only secondary to the primary node assigned by the listener, over a read/write connection. In this particular instance I don't care if the secondary lags behind, but it looks like I have some query tuning to do!
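
If you ever need to confirm which node an SSIS connection actually landed on, a quick sanity check like this, run through the same connection the package uses, reports the server name and whether the current database is writable there:

SELECT @@SERVERNAME AS connected_server,
	DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability; -- READ_WRITE on the primary, READ_ONLY on a readable secondary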

I have added a form for feedback; it would be very helpful if you would take a few minutes to fill it out. My goal is to adjust the content to best help others resolve their problems.


How do I check the metadata of my temporal table?

There was an interesting question recently about checking whether a table had been created with DATA_CONSISTENCY_CHECK, whether the SYSTEM_VERSIONING option was set, and how to find the names of the history tables.

First up, DATA_CONSISTENCY_CHECK.

I have been studying temporal tables A LOT for my 70-761, and I knew most of the answers off the top of my head. The question about DATA_CONSISTENCY_CHECK, and whether that value persists anywhere, was a head scratcher though. It seemed no one had asked the question that way before, and I did not find any links or articles covering that aspect.

I had to create a temporal table to see the results for myself. In short, DATA_CONSISTENCY_CHECK is a one-time action flag used when you create or alter a table. It tells SQL Server to run DBCC CHECKCONSTRAINTS, validating that the data lines up between the table you want to turn into a temporal table and the history table receiving its records. It will not allow system versioning of a table if any row's SysEndTime is earlier than its SysStartTime.

If you try to look up the value DATA_CONSISTENCY_CHECK was set to, you won't find it. It is not persisted after run time; it is just a flag that says "perform a check, please." It is also run by default without you specifying it, so you can leave it out of your typical syntax unless you want to explicitly declare no check on that constraint.
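
For context, the flag only ever appears in the DDL itself. A made-up example of where you would spell it out, when linking a table (that already has its PERIOD FOR SYSTEM_TIME columns defined) to an existing history table:

ALTER TABLE dbo.SomeTable
SET (SYSTEM_VERSIONING = ON (
	HISTORY_TABLE = dbo.SomeTableHistory,
	DATA_CONSISTENCY_CHECK = ON -- the default; the check runs once here and is not stored anywhere
	));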

If you have temporal tables, you can see that it does not exist for yourself:

select * from sys.tables
order by name;

select * from sys.key_constraints;

select * from sys.check_constraints;

select * from sys.sysconstraints;

select definition,name
from sys.check_constraints;

select definition,name
from sys.default_constraints;

If you would like to run the check yourself, you can with this base command; you will want to alter the portion after DBCC CHECKCONSTRAINTS to target the table you care about.

DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS
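
For example, to target just the table in question rather than everything in the database (the table name below is hypothetical):

DBCC CHECKCONSTRAINTS ('dbo.SomeTemporalTable')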

MSDN on DBCC CHECKCONSTRAINTS

Here is how Microsoft officially describes it:

When creating a link to an existing history table, you can choose to perform a data consistency check. This data consistency check ensures that existing records do not overlap. Performing the data consistency check is the default. Use this argument in conjunction with the PERIOD FOR SYSTEM_TIME and GENERATED ALWAYS AS ROW { START | END } arguments to enable system versioning on a table.

MSDN Source.

Generally, performing the data consistency is recommended whenever the data between the current and history tables may be out of sync, such as when incorporating an existing history table that is populated with history data.

MSDN Source.

Now, is SYSTEM_VERSIONING currently set? That effectively means the table is in temporal mode. Here's a quick way to check:

SELECT name, object_id,temporal_type,temporal_type_desc FROM sys.tables
where temporal_type in (1,2)
order by name

Our last question was “How can I find the name of my history table?” We’re going to use our good friend the sys.tables object once more for this.

SELECT NAME
	,object_id
	,temporal_type
	,temporal_type_desc
FROM sys.tables
WHERE temporal_type_desc = 'History_Table'
ORDER BY NAME
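
If you also want to see which current table each history table belongs to, sys.tables exposes history_table_id (SQL Server 2016 and later), so you can join the catalog view to itself:

SELECT t.name AS temporal_table
	,h.name AS history_table
FROM sys.tables AS t
INNER JOIN sys.tables AS h ON h.object_id = t.history_table_id
WHERE t.temporal_type = 2 -- SYSTEM_VERSIONED_TEMPORAL_TABLE
ORDER BY t.name;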

My study guide to the 70-762: Developing SQL Databases

I couldn't find a spreadsheet online to help track the topics, ETA, and other notes, so I decided to compile all of my resources into a single spreadsheet to keep track of my studying and to help others in the community taking the 70-762. You can make a copy of it and track yourself.

Spreadsheet here!

The tab Topic Resources has been taken from Daniel Calbimonte and his post on MSSQLTips. Huge thanks to him for taking the time to compile all of those resources on those topics!

In the Additional Resources tab, I've listed links to things I found incredibly beneficial while studying for the 70-762. It also includes a Time to Study table where you enter how many hours of studying you plan to do on certain days of the week, and it estimates the hours needed based on how long you expect each topic to take. I've already filled in optimistic and pessimistic estimates; as you fill in actual hours worked, it adjusts to reflect the time actually spent rather than only the estimates.

The Topic and Hours page lists the topics from Microsoft's exam page and breaks them into a table where you can track the hours you spend on each one. This gives you a manageable way to figure out how much time the whole thing will take. I've filled in my own optimistic and pessimistic estimates, so feel free to adjust them to match your own knowledge. I figured I would want at least 30 minutes to an hour on each topic, even if it's a brief overview, just to refresh myself.

The last page is the planner. This is where I keep track of the date I studied, the high/mid/detail-level topics I worked on, the amount of time I spent, and two extra fields. One is for notes, in case there is something I need to come back to or additional resources I want to revisit later. The other is the Other Topic column: I'm still reading through the Internal Pros book, so when I log time it's specific to reading that book rather than any of the listed categories. Once I begin working through the 70-762 study guide, I'll start logging time against the topics and making sure I understand them at the level of detail Microsoft lists. This will make for an interesting Power BI graph later to see how my study time was spent. Maybe I can use that data to study more efficiently for 70-767!

Happy studying and see you on the other side!

Column encryption, is it table or data specific?

I ran into an interesting problem lately. I have a table with column data encrypted by a key. That data, however, needs to be part of an ETL process and then end up decrypted.

This led me to ask the question: Can I create a table, insert a record, encrypt the record, move the record to various tables, and finally decrypt the record in a new table successfully?

Microsoft calls this column encryption, and since that sounds like a property of a table, I'd half expect it to be specific to the table that holds the data. The answer is in the MSDN article if you read it thoroughly, however: "Encrypt a Column of Data." It's not "encrypt the column in the table," nor "encrypt the column"; it's a column of data that gets encrypted with a key and certificate.

Let's create a test scenario and put the idea to the test. I couldn't find a definitive black-and-white answer online, so let's figure it out ourselves!

-- Create test table source
CREATE TABLE TestETLSource (
	id INT PRIMARY KEY identity(1, 1)
	,FakeData VARCHAR(255)
	,FakeDataEncrypted VARBINARY(128)
	,StartTime DATETIME2(7)
	,EndTime DATETIME2(7)
	)

-- Insert a record
INSERT INTO TestETLSource (
	FakeData
	,StartTime
	,EndTime
	)
VALUES (
	'Testtesttestyo'
	,'2017-01-02 02:02:02.222'
	,'2017-01-04 02:02:02.222'
	)

Results in table

Before we begin messing with the keys and certificates, let’s see what our current setup is.

--Reference queries to find any symmetric keys, asymmetric keys, and Certificates.
SELECT NAME
	,key_length
	,algorithm_desc
	,create_date
	,modify_date
FROM sys.symmetric_keys;

Symmetric Keys

SELECT NAME
,algorithm_desc
FROM sys.asymmetric_keys;

asymmetric keys

SELECT NAME
,subject
,start_date
,expiry_date
FROM sys.certificates;

certificates

My results returned the typical Microsoft certificates, nothing I had made before, so I'll go ahead and create a master key and certificate.

If you already have a master key or certificate, you can skip this immediate section.

-- Create master key because none exists
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some strong password';

-- Create certificate
CREATE CERTIFICATE FakeCert ENCRYPTION BY PASSWORD = 'some strong password'
	WITH SUBJECT = 'Just some fake data';
GO

Now that we have a master key and certificate, we can create a symmetric key to use that certificate.

-- Create key
CREATE SYMMETRIC KEY FakeKey
WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE FakeCert;
GO

New cert

Let’s test the key out!

-- Open the symmetric key with which to encrypt the data.  
OPEN SYMMETRIC KEY FakeKey DECRYPTION BY CERTIFICATE FakeCert
WITH password = 'some strong password';

Now that it is working, we can go ahead and encrypt the value in the column FakeData using the symmetric key FakeKey. We’ll save the data in the column FakeDataEncrypted.

UPDATE TestETLSource
SET FakeDataEncrypted = EncryptByKey(Key_GUID('FakeKey'), FakeData);
GO

Modified table

Make sure to close your key in your session when you are done!

CLOSE SYMMETRIC KEY FakeKey;
GO

Now let’s verify the results. First, open the symmetric key with which to decrypt the data:

OPEN SYMMETRIC KEY FakeKey DECRYPTION BY CERTIFICATE FakeCert
WITH password = 'some strong password';

Now let's select the FakeData and FakeDataEncrypted columns. If the decryption worked, the decrypted value will match the original data.

SELECT FakeData
,CONVERT(NVARCHAR, DecryptByKey(FakeDataEncrypted)) AS 'Decrypted Data'
FROM TestETLSource;

Incorrect results

Notice how it doesn't match? That's because the original data is varchar; converting the decrypted value to nvarchar misinterprets the bytes.

SELECT FakeData
,CONVERT(VARCHAR, DecryptByKey(FakeDataEncrypted)) AS 'Decrypted Data'
FROM TestETLSource;

Correct results

Much better! Don’t forget to close the key when you’re done again.

-- Close the key now that we are not using it.
CLOSE SYMMETRIC KEY FakeKey;
GO

Now that we are done testing the key and encrypting the data, we can begin with our real test. Let’s create our stage and destination tables.

-- Create test stage table
CREATE TABLE TestETLStage (
Stageid INT PRIMARY KEY identity(1, 1)
,id INT
,FakeData VARCHAR(255)
,FakeDataEncrypted VARBINARY(128)
,StartTime DATETIME2(7)
,EndTime DATETIME2(7)
);

-- Create test destination table
CREATE TABLE TestETLDest (
Destid INT PRIMARY KEY identity(1, 1)
,Stageid INT
,id INT
,FakeData VARCHAR(255)
,FakeDataEncrypted VARBINARY(128)
,StartTime DATETIME2(7)
,EndTime DATETIME2(7)
);

Now we’ll take the record and move it into our staging table to perform a transformation on the data.

-- Insert records into ETL table
INSERT INTO TestETLStage (
id
,FakeData
,FakeDataEncrypted
,StartTime
,endtime
)
SELECT id
,fakedata
,FakeDataEncrypted
,starttime
,endtime
FROM dbo.TestETLSource;

Stage table

Now we’ll perform an update on our stage data.

-- Perform ETL
UPDATE TestETLStage
SET EndTime = (
SELECT getdate()
);

Updated stage table

Now we’ll take that data out of the stage table and move it over to our destination table.

-- Insert into Dest table
INSERT INTO TestETLDest (
Stageid
,id
,FakeData
,FakeDataEncrypted
,StartTime
,endtime
)
SELECT Stageid
,id
,fakedata
,FakeDataEncrypted
,starttime
,endtime
FROM dbo.TestETLStage;

Dest table results

Here's where the rubber meets the road. Are our changes from the ETL table valid? Can we still decrypt the correct values from the column now that we've changed tables and updated values in the rows?

OPEN SYMMETRIC KEY FakeKey DECRYPTION BY CERTIFICATE FakeCert
WITH password = 'some strong password';

SELECT FakeData
,CONVERT(VARCHAR, DecryptByKey(FakeDataEncrypted)) AS 'Decrypted Data'
,FakeDataEncrypted
,Destid
,Stageid
,id
,starttime
,endtime
FROM TestETLDest;

Successful results

Great success!

-- Close the key now that we are not using it.
CLOSE SYMMETRIC KEY FakeKey;
GO

As my testing has shown, you can encrypt a column of data, move that data around different tables, and still decrypt it later. The encryption travels with the data in the column, not with the table.
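
If you are running this in a sandbox and want to tidy up afterwards, a cleanup along these lines should do it, assuming you created everything exactly as above and nothing else depends on the key, certificate, or master key:

-- Drop the test tables
DROP TABLE TestETLDest;
DROP TABLE TestETLStage;
DROP TABLE TestETLSource;

-- Drop the key before the certificate that protects it
DROP SYMMETRIC KEY FakeKey;
DROP CERTIFICATE FakeCert;

-- Only drop the master key if nothing else in the database uses it
DROP MASTER KEY;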


Connecting PowerBI to RaspberryPi (MariaDB Engine)

It's been a little while since I had Power BI working against a remote repository on my Raspberry Pi, and I had some trouble recalling how to configure it. As a result, it was an excellent troubleshooting experience. It also led me to realize there is no guide for connecting Power BI to MySQL (MariaDB) on a Raspberry Pi. There are a lot of similar posts, but the problem with most of the scenarios I found online is that the MySQL database was running locally alongside Power BI, whereas in my case it was remote.

The primary issue I ran into was the error "Details: MySQL: Unable to connect to any of the specified MySQL hosts." I was able to connect to my instance with phpMyAdmin and with MySQL Workbench, so I felt I should be able to connect with Power BI too. I installed MariaDB Workbench just to test whether I could at least access my instance with that tool, and it turned out I ran into a similar issue. The problem was twofold.

1. The bind-address for my MySQL instance was set to the local address, so remote IP connections were not allowed.
2. I needed a user scoped to the correct host to access the box with a SQL login.

To resolve the first issue, I had to find my config file. MariaDB is set up to import four different config files; the one I edited was /etc/mysql/mariadb.conf.d/50-server.cnf. A quick # in front of bind-address=127.0.0.1 and a sudo /etc/init.d/mysql restart got remote IP access fixed up quickly.

To resolve the second issue, I created a user with extra permissions. IMPORTANT NOTE: this is NOT best practice. My Pi is on a private network and does not host any sensitive data, so take this as an example and adapt it to fit your security needs.

I created a user with admin privileges that is open to any host, so I can now reach the Pi from any device on that private network with this login.

CREATE USER 'someuseraccount'@'%' IDENTIFIED BY 'some_pass';
GRANT ALL PRIVILEGES ON *.* TO 'someuseraccount'@'%' WITH GRANT OPTION;
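
To double check that the account took effect before touching Power BI, you can list the users and their allowed hosts from the MariaDB prompt (the account name matches the placeholder above):

SELECT User, Host FROM mysql.user;
SHOW GRANTS FOR 'someuseraccount'@'%';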

In Power BI, I went to Get Data -> MySQL database, put in my server as 192.168.1.155, entered the name of my database, set the credentials type to Database, and used the user and password I had just created in the previous step.

I've also seen other users hit this because of a driver problem, so make sure your MySQL driver for Power BI is installed and up to date.

Here is where I obtained the idea for the bind in step 1.

Here is where I obtained the idea for the user in step 2.

MySQL (Really, MariaDB) on the Raspberry Pi!

I've had a few people ask me lately how I got MySQL running on my Raspberry Pi, so here's my guide on how to go from almost 0 to 100. This guide assumes you understand the basics of Linux and Raspberry Pis: you've probably done a few tutorials and tinkered with one before.

1. Download Raspbian.

Raspbian

2. Wipe the SD card and format it. (SDFormatter is a free tool you can use.)

3. Unzip the file.

4. Write the image to disk. (Win32 Disk Imager is a free tool you can use.)

5. Copy a wpa_supplicant.conf file and an ssh.txt file over to the boot partition of the SD card. You'll need to edit the .conf file to contain your wifi information; the example below is a good template you can modify and use. The ssh.txt file is just a blank file, and its presence tells the Pi to enable SSH on boot.

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YOUR WIFI IS GONNA GO HERE"
    scan_ssid=1
    psk="YOUR PASSWORD IS GOING HERE AND IF YOU DON'T USE ONE THEN SHAME ON YOU"
    key_mgmt=WPA-PSK
}

6. SSH into the PI. (Putty is a free tool you can use.)

7. Run this on your Pi to get RDP working. It's just a nice-to-have; you can skip it if you won't use it, and we won't be using it in this tutorial.

#remove the following packages : xrdp, vnc4server, tightvncserver
sudo apt-get remove xrdp vnc4server tightvncserver

#install tightvncserver followed by xrdp
sudo apt-get install tightvncserver

sudo apt-get install xrdp

8. Now update the Pi so it’s current.

#update your system's package list by entering the following command:
sudo apt-get update

#Next, upgrade all your installed packages to their latest versions with the command:
sudo apt-get dist-upgrade

9. Get MySQL.

sudo apt-get install mysql-server

10. Configure security for MySQL. This will ask you about creating a root user; make sure you give it a password, because phpMyAdmin will NOT work if your root user has no password. I made this mistake the first time and then installed phpMyAdmin, which is where the root password got stored, and I had to re-run the configuration for phpMyAdmin by following the steps posted below. (Credit to Amigo Chan on AskUbuntu.) (For anyone googling this, I saw errors such as "not enough privilege to view users. phpmyadmin" and "access denied for user 'root'@'localhost' TO DATABASE phpmyadmin".)

sudo mysql_secure_installation

11. Install bindings for Python for later use.

sudo apt-get install python-mysqldb

12. Create a MySQL user.

# Log into the MySQL shell as root
sudo mysql -u root -p

-- At the MySQL prompt, create the user and grant privileges (swap in your own database, username, and password)
GRANT ALL PRIVILEGES ON mydb.* TO 'USERNAME'@'localhost' IDENTIFIED BY 'PASSWORD GOES HERE';

quit

13. Get apache2 up and running.

sudo apt-get install apache2 php5 libapache2-mod-php5

14. Get phpmyadmin up and running.

sudo apt-get install phpmyadmin

15. Configure apache2.

sudo nano /etc/apache2/apache2.conf

16. Restart apache2.

sudo /etc/init.d/apache2 restart

17. Now go to a web browser and enter the IP of the Pi with /phpmyadmin at the end. Example: 192.168.1.155/phpmyadmin

And Voila! You should be in!

Additional resources used:
https://pimylifeup.com/raspberry-pi-mysql-phpmyadmin/