# Safe resource allocation and deallocation in C++ (C++11 and C++03)

## Introduction

Before around 2010, one would routinely run Valgrind to make sure that a C++ program wasn't leaking memory. There's a whole modern way of using C++ that removes that need almost entirely. If you're writing C++11 (and even if you're not), and you still have to keep checking whether your code is leaking memory, then you're most likely doing it all wrong. Keep reading if you want to know the right way that saves you the trouble of having to memory-check all the time, and uses true object-oriented programming.

Basically, this article is about the concept named RAII, Resource Acquisition Is Initialization, and why it's important from my perspective.

## The golden rules of why you should do this

My Golden Rules in C++ development:

1. Humans make mistakes, and only an arrogant person would claim he doesn't (why else do we have something called "error handling"?)
2. Things can go wrong, no matter how robust your program is and how careful you are, and you can't plan for every possible outcome
3. A problem-prone program is no better than a program with a problem; so why bother writing a program with problems?

Once you embrace these 3 rules, you'll stop writing bad code, because your code will be ready for the worst-case scenario.

In the last few years, I haven't found a single lost byte in my Valgrind-analysed programs; and I'm talking about big projects, not the single-class level. The difference will become clear soon.

I'm going to start from simple cases, up to more complicated scenarios.

### Scenario 1: If you're creating and deleting objects under pointers

Consider the following example:

void DoSomethingElse(int* x)
{
std::cout << *x << std::endl;
//... do stuff with x
}
void DoSomething()
{
int* x = new int;
DoSomethingElse(x);
delete x;
}


Let me make this as clear as possible: if you ever, ever use a new followed by a delete… you're breaking all 3 rules we set up there. Why? Here's how you're breaking each of them:

1. You may make the mistake of forgetting to write that delete
2. DoSomethingElse() might throw an exception, and hence that delete may not be called.
3. This is a problem-prone design, so it's a program with a problem.

The right way to do this: Smart pointers!

What are smart pointers?

I'm sure you've heard of them before, but if you haven't, the idea is very simple. If you define, for example, an integer like this:

void SomeFunction()
{
int i = 0;
// do stuff with i
} //here you're going out of the scope of the function


You never worry about deleting i. The reason is that once i goes out of scope, it's deleted automatically (through a destructor). Smart pointers are just the same. They wrap your pointer, such that they are deleted once they are out of scope. Let's look at our function DoSomething() again with smart pointers:

void DoSomething()
{
std::unique_ptr<int> x(new int); //line 1
DoSomethingElse(x.get());        //line 2
} //once gone out of scope, x will be deleted automatically,
//the destructor of unique_ptr will delete the integer


That's all the change you have to make, and you're done! In line 1, you're creating a unique_ptr, which will encapsulate your pointer. The reason why it's "unique" will soon become clear. Once a unique_ptr goes out of scope, it deletes the object under it. So you don't have to worry! This way, the 3 Golden Rules are served. In line 2, we're using x.get() instead of x, because the get() method returns the raw pointer stored inside the unique_ptr. If you'd like to delete the object manually, use the method x.reset(). The method reset() can take another pointer as a parameter, or can be called with no arguments to make the unique_ptr hold nullptr.

PS: unique_ptr is C++11. If you're using C++03, you could use a similar smart pointer from the Boost library (boost::scoped_ptr covers the basic case).

Why is it called "unique"?

Generally, multiple pointers can point to the same object. So, going back to the initial example, the following is a possible scenario:

void DoSomething()
{
int* x = new int(1);
int* y = x; //now x and y, both, point to the same integer
std::cout << *x << "\t" << *y << std::endl; //both will print 1
*x = *x + 1; //add 1 to the object under x
std::cout << *x << "\t" << *y << std::endl; //both will print 2
delete x; //you delete only 1 object, not 2!
}


But can you do this with unique_ptr? The answer is *no*! That's why it's called unique, because it's a pointer that holds complete *ownership* of the object under it (the integer, in our case), and it's unique in that. If you try to do this:

void DoSomething()
{
std::unique_ptr<int> x(new int);
std::unique_ptr<int> y = x; //compile error!
} 


your program won't compile! Think about it… if this were to compile, who should delete the pointer when the function ends, x or y? It's ambiguous and dangerous. In fact, this is exactly why auto_ptr was deprecated in C++11. It allowed the copy operation shown above, but the copy silently *moved* the object from one auto_ptr to the other. This was dangerous and semantically unclear, which is why it was deprecated.

On the other hand, you can move an object from one unique_ptr to another! Here's how:

void DoSomething()
{
std::unique_ptr<int> x(new int);
std::unique_ptr<int> y = std::move(x);
//now x is empty, and y has the integer,
//and y is responsible for deleting the integer
}


With std::move(x), you convert x to an rvalue reference, indicating that it can safely be moved from.

##### Shared pointers

Since we established that unique pointers are "unique", let's introduce the solution to the case where multiple smart pointers can point to the same object. The answer is: shared_ptr. Here's the same example:

void DoSomething()
{
std::shared_ptr<int> x(new int(2)); //the value of *x is 2
std::shared_ptr<int> y = x; //this is valid!
//now both x and y point to the integer
}


Who is responsible for deleting the object now, x or y? Generally, either of them! Shared pointers maintain a common reference counter: they count how many shared_ptrs point to the same object, and once the counter drops to zero (i.e., the last shared_ptr goes out of scope), that last shared_ptr deletes the object.

In fact, using the new operator manually is highly discouraged. The alternative is to use make_shared, which covers some corner cases of possible memory leaks; for example, when the shared_ptr is constructed inside a function call and another argument's evaluation throws an exception after new has run but before the shared_ptr takes ownership. Here's how make_shared is used:

void DoSomething()
{
std::shared_ptr<int> x = std::make_shared<int>(2); //the value of *x is 2
std::shared_ptr<int> y = x; //this is valid!
//now both x and y point to the integer
}


Note: Shared pointers change a fundamental aspect of C++, which is "ownership". When using shared pointers, it may be easy to lose track of the object. This is a common problem in asynchronous applications. This is a story for another day though.

Note 2: The reference counter of shared_ptr is thread-safe. You can pass it among threads with no problems. However, the thread-safety of the underlying object it points to is your responsibility.

### Scenario 2: I don't have C++11 and I can't use boost

This is a common scenario in organizations that maintain very old software. The solution to this is very easy. Write your own smart pointer class. How hard can it be? Here's a simple quick-and-dirty example that works:

template <typename T>
class SmartPtr
{
T* ptr;
// disable copying by declaring copy-construction and assignment
// private (and leaving them undefined, so accidental use won't link)
SmartPtr(const SmartPtr& other);
SmartPtr& operator=(const SmartPtr& other);
public:
SmartPtr(T* the_ptr = NULL)
{
ptr = NULL;
reset(the_ptr);
}
~SmartPtr()
{
reset();
}
void reset(T* the_ptr = NULL)
{
if(ptr != NULL)
{
delete ptr;
}
ptr = the_ptr;
}
T* get() const //get the pointer
{
return ptr;
}
T& operator*()
{
return *ptr;
}
T* release() //release ownership of the pointer
{
T* ptr_to_return = ptr;
ptr = NULL;
return ptr_to_return;
}
};


and that's it! One method I haven't explained is release(). It simply hands back the pointer without deleting it. So it's a way to tell the smart pointer: "Give me the pointer, and forget about deleting it; I'll take care of that myself".

You can now use this class exactly like you use unique_ptr. Creating your own shared_ptr is a little more complicated though, and depends on your needs. Here are the questions you need to ask yourself when designing it:

1. Do you need multithreading support? shared_ptr supports thread-safe reference counting.
2. Do you need to just count references, or also track them? For some cases, one might need to track all references with something like a vector of references or a map.
3. Do you need to support release()? Releasing is not supported in shared_ptr: with plain reference counting, there's no way to tell the other instances to give up ownership.

More requirements mean more work, especially since, prior to C++11, multithreading was not in the C++ standard, meaning you're gonna have to use platform-specific code.

For a strictly single-threaded application with C++03, I created a shared pointer implementation that supports releasing. Here's the source code.

### Scenario 3: Enable a flag, do something, then disable it again

Consider the following code, which is common in GUI applications:

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
this->disableButton();
for(unsigned i = 0; i < files.size(); i++)
{
//... add files[i], which may fail or throw
}
this->enableButton();
}


While this looks like a legitimate way to do things, it's not. This is absolutely no different from the pointer situation. What if adding fails, either because of a memory problem or because of some exception? The function will exit without re-enabling that button, your program will become unusable, and the user will probably have to restart it.

Solution? Just like before. Don't do it yourself, and get the destructor of some class to do it for you. Let's do this. What do we need? We need a class that will call a function with a reference to some variable on exit. Consider the following class:

class AutoHandle
{
std::function<void()> func;
bool done = false; //used to make sure the call is done only once
// disable copying and moving
AutoHandle(const AutoHandle& other) = delete;
AutoHandle& operator=(const AutoHandle& other) = delete;
AutoHandle(AutoHandle&& other) = delete;
AutoHandle& operator=(AutoHandle&& other) = delete;
public:
AutoHandle(const std::function<void()>& the_func)
{
func = the_func;
}
void doCall()
{
if(!done)
{
func();
done = true;
}
}
~AutoHandle()
{
doCall();
}
};


Let's use it!

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
this->disableButton();
AutoHandle ah([this](){this->enableButton();}); //lambda function that contains the function to be called on exit
for(unsigned i = 0; i < files.size(); i++)
{
//... add files[i], which may fail or throw
}
} //Now, the function enableButton() will definitely be called on exit.


This way, you guarantee that enableButton() will be called when the function exits. This whole thing is C++11. Doing it in C++03 is not impossible, though I completely sympathize with you if you feel it's too much work for such a simple task, because:

1. Since there's no std::function in C++03, we're gonna have to make the class a template that accepts functors (function objects)
2. Since there are no lambda functions in C++03, we're gonna have to write a new functor for every case (depending on how much you'd like to toy with templates, which is another big topic)

Just for completeness, here's how you could implement and use AutoHandle in C++03 with a functor:

template <typename CallFunctor, typename T>
class AutoHandle
{
bool done; //used to make sure the call is done only once
CallFunctor func;
// disable copying by declaring copy-construction and assignment
// private (and leaving them undefined, so accidental use won't link)
AutoHandle(const AutoHandle& other);
AutoHandle& operator=(const AutoHandle& other);
public:
AutoHandle(T* caller) : func(CallFunctor(caller))
{
done = false;
}
void doCall()
{
if(!done)
{
func();
done = true;
}
}
~AutoHandle()
{
doCall();
}
};

struct DoEnableButtonFunctor
{
GUIClass* this_ptr;
DoEnableButtonFunctor(GUIClass* thisPtr)
{
this_ptr = thisPtr;
}
void operator()()
{
this_ptr->enableButton();
}
};


Here's how you can use this:

void GUIClass::addTheFiles(const std::vector<FileType>& files)
{
this->disableButton();
AutoHandle<DoEnableButtonFunctor,GUIClass> ah(this); //functor will be called on exit
for(unsigned i = 0; i < files.size(); i++)
{
//... add files[i], which may fail or throw
}
} //Now, the function enableButton() will definitely be called on exit.


Again, writing a functor for every case is a little painful, and you can decide whether it's worth it depending on the specific case. In C++11 projects, however, there's no excuse: you can easily make your code way more reliable with lambdas.

Remember, you're not bound to the destructor to do the call. You can also call doCall() yourself anywhere (similar to reset() in unique_ptr). But the destructor will *guarantee* that the worst-case scenario is covered if something goes wrong.

### Scenario 4: Opening and closing resources

This could even be more dangerous than the previous cases. Consider the following:

void ReadData()
{
int handle = OpenSerialPort("COM3");
//... read data using the handle
Close(handle);
}


This form is quite common in old libraries; I faced it with the HDF5 library. If you read the previous sections, you already see the problem with such usage and the idea of how to fix it. It's all the same: you *should never* close resources manually. For my HDF5 problem, I wrote a SmartHandle class that guarantees that HDF5 resources are correctly closed. Find it here. Of course, the formally right way to do this is to write a whole wrapper for the library, but that may be overkill depending on your project constraints.

#### Notes on Valgrind

If you follow these rules, you'll be essentially 100% safe with the resources you use; you'll rarely ever need Valgrind. However, when you write the classes we talked about (such as AutoHandle, SmartPtr, etc.), it's very, very important not only to test them with Valgrind, but also to write good tests that cover every corner case. Because once you get these classes right, you never have to worry about them again. If you get them wrong, the consequences could be catastrophic. Surprising? Welcome to object-oriented programming! This is exactly what "separation of concerns" means.

## Conclusion

Whenever you have to do something, then undo it, keep in mind that you shouldn't do this manually. Sometimes it's safe and trivial, but many times it leads to simply bad and error-prone design. I covered a few cases and different ways to tackle the issue. By following these examples, I guarantee that your code will become more compact (given that you're using C++11) and far more reliable.

# Can I use Qt Creator without Qt?

Qt Creator… the best IDE for developing C and C++ I’ve ever seen in my life. Since I like it that much, I’m sharing some of what I know about it.

DISCLAIMER: I’m not a lawyer, so don’t hold me liable for anything I say here about licensing. Anything you do is your own responsibility.

### What does using Qt Creator without Qt mean?

It simply means that you don’t have to install or use the Qt libraries, including qmake. There are many reasons why that may be the case:

• You may have an issue with licensing, since qmake is LGPL-licensed
• You may not be able to install qmake alone without its whole gear, as is the case on Windows
• You may not want to compile the whole Qt libraries if pre-compiled versions that work with your compiler are not available

While I love Qt and use it all the time, I follow the principle of decoupling my projects from libraries if I don’t use them. But since I love Qt Creator, I still want to use it! Reasons will become clear below.

### What are the ingredients of this recipe?

1. Qt Creator
2. CMake
3. A compiler (gcc, MinGW or Visual Studio, or anything else)

## Basic steps

After installing the 3 ingredients, make sure that Qt Creator recognizes that CMake exists on the computer. The next picture shows how it looks when CMake was found. If it doesn’t find CMake on Windows, most likely the reason is that you chose during installation not to add CMake to the system’s PATH. You can just add it manually if it can’t be found automatically.

Next, make sure that Qt Creator recognizes the compiler and the debugger, as you see in the next pictures. Again, you can add them manually.

For Visual Studio to be found automatically, I guess the environment variable VS140COMNTOOLS has to be defined. “140” is version 14 of Visual Studio, which is the version number of Visual Studio 2015. It’s defined by default when Visual Studio is installed. For MinGW to be detected automatically, the bin of MinGW has to be in PATH.

Go to the “Kits” tab. If you have Qt libraries installed and configured, you’ll see them there. I don’t like Qt SDK, and I usually compile my own versions of Qt. You’ll see that in the next screenshot.

What you see in the next screenshot are 3 kits that use Qt libraries, and another that does not. The free version of Visual Studio (2012 and later) comes with both 32-bit and 64-bit compilers. You can choose any one of them, or both (like I do). I don’t use MinGW often on Windows, so I install only 1 version of it (I use MinGW on Windows primarily because it offers the “-pedantic” flag, which gives the chance to experiment with C++ standard-approved features).

Now click “Add”, and fill the fields as shown below (most importantly, configure CMake correctly; it’s a little tricky, and the error Qt Creator shows doesn’t tell you what you did wrong).

If you’re getting unexplained errors when running CMake, follow these instructions carefully.

### Visual Studio

The following is a screenshot of how Visual Studio configuration should look like (screenshot is for 64-bit)

Set the following

• For the 32-bit compiler: Choose the compiler with (x86)
• For the 64-bit compiler: Choose the compiler with (x86_amd64) or (amd64)
• Choose “None” for Qt version
• Choose the correct debugger
• Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “NMake Makefiles”, and the Extra generator to be CodeBlocks

That last detail is an invitation to all kinds of problems. If running CMake doesn’t work, you most likely configured that part incorrectly.

### MinGW

The following is a screenshot of how MinGW configuration should look like:

• Choose “None” for Qt version
• Choose the correct debugger
• Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “MinGW Makefiles”, and the Extra generator to be CodeBlocks

That last detail is an invitation to all kinds of problems. If running CMake doesn’t work, you most likely configured that part incorrectly.

I have to say that, from my experience, Qt Creator sometimes fails to run CMake with MinGW for no good reason. I fix this by switching the “Extra generator” to “CodeLite”, and back to “CodeBlocks”. I’m currently using Qt Creator 4.2.0. It might be a bug.

### gcc/g++

It’s very easy to get gcc/g++ to work. The following is a screenshot:

• Choose “None” for Qt version
• Most importantly: After choosing CMake from the drop-down list, make sure that CMake generator is chosen to be “CodeBlocks – Unix Makefiles”.  It’s chosen by default, so nothing to worry about.

And you’re done!

## What can I do with CMake + Qt Creator?

### No ultimate dependence on Qt Creator!

Visual Studio solutions don’t work without Visual Studio. NetBeans configuration is stored in a “netbeans project directory”. It’s very annoying that every IDE has its own weird format! I never stop hearing people complain about porting their programs to other systems and having problems because of this.

One of the things I like most about Qt Creator is the fact that it doesn’t have any special project files for itself. The Makefile itself (CMake file, in this case, or qmake otherwise) is the project file. This ensures not only 100% portability (since both make systems are cross-platform) but also independence of Qt Creator itself. In the future, if the whole Qt Creator project goes down, your project won’t be affected at all.

### What if I want to use Qt Creator just as an IDE without having to build a project through it?

I had the “luxury” of getting a project from a space agency to add some features to it, where they had their own build system. I wasn’t able to use Qt Creator to build the project, but…

But why use Qt Creator? Simply because you’ll get

• Syntax highlighting
• target functions and classes following
• advanced refactoring options (like renaming classes, functions, etc)
• Search capabilities in source files.
• Type hierarchy navigation with a click
• And lots more!

With all these benefits, even though the project contained a few thousand source files, I was able to add *all* of them and have Qt Creator parse them using a few CMake lines. And it was fast enough to handle all that!

How to do it? How to add all these files in one step?

CMake supports recursive adding of source files. Consider the following CMake file (always called CMakeLists.txt):

cmake_minimum_required(VERSION 3.0)
PROJECT(MyProject)

file(GLOB_RECURSE MySrcFiles
"${CMAKE_SOURCE_DIR}/src/*.cpp" "${CMAKE_SOURCE_DIR}/src/*.h"
)

add_library(MySrcFilesLib ${MySrcFiles})
add_executable(MyExecutable main.cpp)
target_link_libraries(MyExecutable MySrcFilesLib)

The first two lines are obvious. The “file” part recursively finds all the files under the mentioned directory and saves them in the variable ${MySrcFiles}. The variable ${CMAKE_SOURCE_DIR} is basically the directory where your CMake file “CMakeLists.txt” is located. Feel free to set any directory you find fit. The “add_library” part creates a library from the source (and header) files saved in ${MySrcFiles}.

The “add_executable” part  “creates” the executable, which is then linked to the libraries you added.

That last linking part is not necessary if you don’t want to build in Qt Creator.

With such a simple CMake file, Qt Creator was smart enough to add all the source files, parse them, and give me all the functionality I needed to edit that project successfully.

# SSH proxy with Putty that reconnects automatically

### Introduction

Putty can be used to tunnel your connection through your SSH server. It creates a SOCKS proxy that you can use anywhere. The annoying part there is that if Putty disconnects for any reason, you’ll have to reestablish the connection manually. If you’re happy reconnecting putty manually all the time, then this article is not for you. Otherwise, keep reading to see how I managed to find a fair solution to this problem.

### Ingredients

For this recipe, you need the following

1. Putty, of course. You could use “Putty Tray”, which supports being minimized to the system tray.
2. Python 3

I understand that you may not have or use Python, but the solution I used depends on it, unfortunately. You could download Miniconda 3, which is a smaller version of Anaconda 3. Anaconda is my recommended Python distribution.

### How does this work in a nutshell?

The idea is very simple.

1. Create a Putty session configuration that suits you
2. Configure the session to close on disconnect
3. Create a script that will relaunch your putty session on exit
4. Make sure that the launcher you’re gonna use doesn’t keep a command prompt window open (which is why I’m using Python; pythonw.exe solves this issue).

### Configuring Putty

To create a putty tunnel proxy, load your favorite session configuration, and then go to the tunnels configuration, and use the following settings, assuming you want the SOCKS proxy port to be 5150. Choose any port you like.

After connecting with this configuration, putty creates a SOCKS proxy that you can connect to at the loopback IP address 127.0.0.1, port 5150.

One more important thing to configure in putty is to make it exit on failure. This is important because we’re gonna make putty reconnect through a program that detects that it exited and starts it again.

### Configuring Python and the launch script

Assuming you installed Python 3 and included it in your PATH, you now have to install a package called tendo. This package is used to prevent running multiple instances of the program.

To install it, first, run the command prompt of Windows (in case Python is installed directly on the system drive, C:\, you have to run it as administrator). In the command prompt, to ensure that Python is working fine, run:

python -V

If this responds with no error and gives a message like:

Python 3.5.1 :: Anaconda 4.1.0 (32-bit)

Then you’re good to go! Otherwise, make sure that Python is added to PATH, and try again.

To install tendo, simply run in command prompt

pip install tendo

After that, in the directory of putty, write this script to a file:


import subprocess
from tendo import singleton
import time

me = singleton.SingleInstance() #exits if another instance exists

while True:
    print("Starting putty session...")
    subprocess.call(["putty.exe", "-load", "mysession"]) #blocks until putty exits
    print("Putty session closed... restarting...")
    time.sleep(5)   #sleep to avoid infinitely fast restarting if no connection is present

The name “mysession” is the name of your session in putty. Replace it with your session name.

This script first checks that the current instance is the only instance, and then runs an infinite loop that restarts putty every time it exits. So we made putty exit on disconnect, and this script will just rerun it indefinitely.

Save this script to some file like “MyLoop.pyw”.

### Testing the loop

Python has two executables. First is “python.exe”, and the other one is “pythonw.exe”. The difference is quite simple. The first one, “python.exe”, runs your script as a terminal program. The second one, “pythonw.exe”, runs your script without a terminal. It’s designed for GUI applications. Now “python.exe” is not what we need, but it still is useful for debugging the script. So whenever you have a problem or when you want to run this for the first time, switch to/use “python.exe”. Once you’re done and everything looks fine, switch to “pythonw.exe”.

### Final step: The execution shortcut

This is not necessary, but it makes things easy. It makes it easy to control whether you want to use “python.exe” or “pythonw.exe”, and it makes your script handy. Simply create a shortcut to “python.exe” or “pythonw.exe”, with the first command line parameter being your script. Remember that “Start in” has to be the directory where Putty is located. The following picture is an example of how that shortcut should look.

And you’re good to go! Start with “python.exe”, and once it works, and you find that every time you exit putty or a disconnection happens it relaunches it, switch to “pythonw.exe”, and you’re done.

### Final notes

This is not a super-fancy solution. This is a solution that’ll get you through and get the job done. If you want to exit the looping script, you’ll have to kill Python from your task-manager.

You may create a fancy taskbar app that’ll do the looping and exits, which I would’ve done if I had the time. So please share it if you do! You can use PyQt or PySide for that.

### Conclusion

With this, you’ll keep reconnecting on disconnect, and you can get all your software to use your ssh server as a SOCKS proxy. Cheers!

# Tunnel through https to your ssh server, and bypass all firewalls – The perfect tunnel! (HAProxy + socat)

### Disclaimer

Perhaps there’s no way to emphasize this more, but I don’t encourage violation of corporate policy. I do this stuff for fun, as I love programming and I love automating my life and gaining more convenience and control with technology. I’m not responsible for any problem you might get with your boss in your job for using this against your company’s firewall, or any similar problem for that matter.

## Introduction

I was in a hotel in Hannover, when I tried to access my server’s ssh. My ssh client, Putty, gave this disappointing message

At first I got scared as I thought my server was down, but then I visited the websites on that server, and they were fine. After some investigation, I found that my hotel blocks access to many ports, including port 22, i.e., ssh. Did this mean that I wouldn’t have access to my server during my trip? Not really!

I assume you’re using a Windows client, but in case you’re using linux, the changes you have to do are minimal, and I provide side-by-side how to do the same on a linux client. Let me know if you have a problem with any of this.

## Tunneling mechanism, and problems with other methods that are already available

There is software that does something similar for you automatically, like sslh, but there’s a problem with it.

###### What does sslh do?

When you install sslh on your server, you choose, for example, port 443 for it. Port 443 is normally for http over ssl (https) and is normally taken by your webserver, so you also change your webserver’s port to some arbitrary port, say 22443. Then, when you connect to that server, sslh analyzes the incoming network packets and detects whether they are ssh or https. If the packets are ssh, it forwards them to port 22. If the packets look like https, it forwards them to the dummy port you chose, 22443 in our case.

###### What’s the problem with sslh, and similar programs?

It all depends on how sophisticated the firewall you’re fighting is. Some firewalls are mediocre: they just blindly open port 443, and you can do your sslh trick there and everything will work fine. But smart firewalls are not that dull; they analyze your packets and then judge whether you’re allowed to stay connected. Hence, a smart firewall will detect that you’re trying to tunnel ssh, and will stop you!

###### How do we solve this problem?

The solution is: masquerade the ssh packets inside an https connection, so that the firewall would have to perform a man-in-the-middle attack in order to know what you’re trying to do. This will never happen! Hence, I call this solution “the perfect solution“.

## How to create the tunnel?

I use HAProxy for this purpose. You need that on your server. It’s available in standard linux systems. In Debian and Ubuntu, you can install it using

sudo apt-get install haproxy

You will need “socat” on your client to connect to this tunnel. This comes later after setting up HAProxy.

###### How does HAProxy work?

I don’t have a PhD in HAProxy; it’s a fairly complicated program that can be used for many purposes, including load balancing and simple internal proxying between different ports, and I use it only for the latter purpose. Let me give a brief explanation of how it works. HAProxy uses the model of frontends and backends. A frontend is what a client sees: you set a port there, and a communication mode (tcp, for example). You also tell a frontend where the packets should go, based on some conditions (called ACLs, Access Control Lists); that is, you choose to which backend the packets go. The backend contains information about the target local port. In short, you tell HAProxy to forward packets from a frontend to a backend based on some conditions.

###### A little complication if you use https websites on the same server

If you run an https webserver on the same machine, you’ll have a problem. You need to check whether the packets are ssh before decrypting them, because once you decrypt them, you can’t use them as non-encrypted again (haproxy doesn’t support forking encrypted and decrypted packets side-by-side), and whether to decrypt is decided in the frontend. That’s why we use SNI (Server Name Indication) and do a trick:

• If there’s no SNI (no server name, just IP address), then forward to ssh
• If server name used is ssh.example.com (some subdomain you choose), then forward to ssh (optional)
• If anything else is the case, forward to the https web server port

We also use two frontends. The first one is the main one; the second is a dummy frontend, used only to decrypt the ssh connection’s https masquerade, since HAProxy decrypts only in frontends.

## Let’s do it!

The configuration file of HAProxy in Debian/Ubuntu is

/etc/haproxy/haproxy.cfg

You could use nano, vi or vim to edit it (you definitely have to be root or use sudo). For example:

sudo nano /etc/haproxy/haproxy.cfg
###### Assumptions
1. Your main https port is 443
2. Your main ssh port is 22
3. Your https webserver is now on 22443
4. The dummy ssh port is 22222 (used just for decryption; it doesn’t matter what you put there)
###### Main frontend

This is the frontend that will take care of the main port (supposedly 443). Everything after a sharp sign (#) on a line is a comment.

#here's a definition of a frontend. You always give frontends and backends a name
frontend TheMainSSLPort
mode tcp
option tcplog
bind 0.0.0.0:443 #listen to port 443 under all ip-addresses

timeout client 5h #timeout is quite important, so that you don't get disconnected on idle
option clitcpka

tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }

#here you define the backend you wanna use. The second parameter is the backend name
use_backend sshDecrypt if !{ req_ssl_sni -m found } #if no SNI is given, then go to SSH
use_backend sshDecrypt if { req_ssl_sni -i ssh.example.com } #if SNI is ssh.example.com, also go to ssh

default_backend sslWebServerPort #if none of the above apply, then this is https

In the previous configuration, we have two paths for the packets, i.e., two backends:

1. If the connection is ssh, the backend named “sshDecrypt” will be used.
2. If the connection is https, the backend named “sslWebServerPort” will be used.
###### The https backend

I put this here first because it’s easier. All you have to do here is forward the packets to your webserver’s port, which we assumed to be port 22443. The following is the relevant configuration:

backend sslWebServerPort
mode tcp
option tcplog
server local_https_server 127.0.0.1:22443 #forward to this server, port 22443

Now the https part is done. Let’s work on the ssh part.

###### The ssh front- and backends

We’ll have to use a trick, as mentioned before, to get this to work. Once the decision is made (using SNI) that packets go to ssh, the packets have to be decrypted. This is not possible in a backend, so we use a backend to forward the packets to a dummy frontend that decrypts them, and then another backend sends the packets to the ssh server.

backend sshDecrypt #this name must match the one used in use_backend above
mode tcp
option tcplog
server sshDecFrontend 127.0.0.1:22222
timeout server 5h

This forwards the packets to port 22222. Now we build a frontend at that port that decrypts the packets.

frontend sshDecryptionPort
timeout client 5h
option clitcpka

bind 0.0.0.0:22222 ssl crt /path/to/combined/certs.pem no-sslv3
mode tcp
option tcplog

tcp-request inspect-delay 5s
tcp-request content accept if HTTP

default_backend sshServ #forward to the ssh server backend

The file /path/to/combined/certs.pem has to contain your SSL private key, certificate and certificate chain, concatenated into one file.
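For example, a minimal sketch of that concatenation (the file names here are hypothetical placeholders, created as dummies so the example is self-contained; substitute your real key, certificate and chain files):

```shell
# Hypothetical placeholder files standing in for your real key/cert/chain:
printf 'PRIVATE KEY\n' > privkey.pem
printf 'CERTIFICATE\n' > cert.pem
printf 'CHAIN\n' > chain.pem
# Concatenate them into the single file HAProxy's "ssl crt" option expects:
cat privkey.pem cert.pem chain.pem > certs.pem
```

The order within the file is generally key first, then certificate, then the chain.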

Finally, the backend to the ssh server:

backend sshServ
mode tcp
option tcplog
server sshServer1 127.0.0.1:22
timeout server 5h

That’s all you need to create the tunnel.

###### Test your haproxy configuration on the server

To test your configuration, stop HAProxy using

sudo service haproxy stop

and run the following command to start HAProxy in debug mode:

sudo haproxy -d -f /etc/haproxy/haproxy.cfg

The “-d” flag is debug mode, and the “-f” flag is used to choose the config file. The typical output looks like:

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.

If you don’t get any errors, then your configuration is OK. Press ctrl+c to close this foreground version of HAProxy, and start the HAProxy service:

sudo service haproxy start

To test your client, you can use OpenSSL. The following command will connect to the server.

openssl s_client -connect ssh.example.com:443

You can also use your IP address. This will connect to HAProxy and, if your configuration is correct, the connection will be routed to ssh. Once it reaches the ssh server, you’re good! You’ll see lots of output from OpenSSL, and finally, a few seconds later, the following message will appear if you’re using a Debian server:

SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u2

The message will vary depending on your server’s linux distribution and OpenSSH server version. Seeing it means you have reached your ssh server successfully. Now you have to set up a client connection to your server’s tunnel.

## Connecting to the ssh server using the tunnel

You need socat to connect to the https tunnel, and then you ssh through that tunnel. On Windows, socat can be downloaded as a zip package (please google it and try it; if it works, great, though I had problems with its OpenSSL dlls when I first tried this), or you can get it through Cygwin. Cygwin is a set of linux programs compiled for Windows. Don’t get too confident with the installer and download all its components, or you’ll easily consume 30 GB of diskspace and 10 hours installing them. Just download what you need.

In case you’re using a linux client, socat is a standard linux program. Just install it with your default package manager, e.g. in Debian/Ubuntu

sudo apt-get install socat
###### Running the socat tunnel

Open your Windows command prompt as administrator (or linux terminal), and use the following command to connect to your server using socat

socat -d TCP-LISTEN:8888,fork,range=127.0.0.1/32 OPENSSL-CONNECT:ssh.example.com:443,verify=0

Here we use port 8888 as an intermediate local port on your client. Once this runs with no errors, you’re ready to use an ssh client.

Warning: The “-v” flag makes socat verbose. Use it only for tests, not for serious connections, as it writes everything to the terminal where socat is running, and since the Windows Command Prompt prints messages synchronously, it will slow everything down for you.

###### Connect with your ssh client

Assuming you’re using putty, this is how your client should look (host 127.0.0.1, port 8888, the local port socat listens on):

Or if you’re using a linux client, simply use this in your terminal

ssh 127.0.0.1 -p 8888

And you should connect, and you’re done! Congratulations! You’re connected to your ssh server through https.

## What about the other ports, other than ssh?

Once you have ssh working, everything else is easy. You can use an ssh SOCKS proxy tunnel. Putty does this easily for you. All you have to do is configure your connection as in the picture:

This creates a SOCKS proxy. To use it, I provide the following example that I do on Thunderbird to secure my e-mail connections. You can do the same on any program you like, even on your web browser:

You can do the same on linux. Please google how to create an ssh tunnel on linux for SOCKS proxy.
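On linux, a sketch of such a SOCKS tunnel would look like the command in the comment below (this is my illustration, assuming socat is still listening on local port 8888 as set up earlier; the SOCKS port 1080 is an arbitrary choice). Since no tunnel is up here, the example uses ssh’s -G flag, which only prints the resolved options instead of connecting:

```shell
# The real command would be:  ssh -N -D 1080 -p 8888 127.0.0.1
#   -D 1080  open a SOCKS proxy on local port 1080 (arbitrary choice)
#   -N       don't run a remote command, just forward traffic
# With -G, ssh resolves and prints the configuration instead of connecting:
ssh -G -N -D 1080 -p 8888 127.0.0.1 | grep -E '^(port|dynamicforward) '
```

Point your browser or mail client at SOCKS proxy 127.0.0.1:1080 once the real command is running.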

## Conclusion

You’re connected! You can bypass any firewall just by being given access to port 443. The only way to stop you is to cut off your internet completely 🙂

# Start your computer remotely using Raspberry Pi

### Why do that?

I have my server at home, which contains multiple hard-drives with all my data on them in a RAID configuration. I have my way of accessing this server remotely from anywhere in the world, but in order to access the server, it has to be turned on! So the problem is: how do I turn on my server when I need it?

This whole thing took me about 4 hours of work. It turns out it’s much easier than it looks.

### Why is keeping the server turned on all the time a bad idea?

Of course, a web-server can be left turned on all the time to be accessed from everywhere at any time, but for a server used to store data, I don’t see why one would turn it on unless one needs something from it. In fact, I see the following disadvantages in keeping the server on all the time, and benefits in being able to turn it on remotely:

1. High power consumption. The server I use is low-power, but why use 150 W all the time with no real benefit?
2. Reduced server life-span: components like the processor have a mean life-time that is consumed by continuous operation.
3. Fans wear out and become noisier the longer they run.
4. What if the server freezes? I should be able to restart it remotely.

### What do you need to pull this off?

• Raspberry Pi (1 or 2, doesn’t matter, but I’ll be discussing 2)

• 5 Volts Relay. I use a 4 Channel relay module. It costs like $7 on eBay or Amazon, depending on how many channels you need.

• Jumper cables (female-female, specifically, if you’re using a Raspberry Pi + a similar Relay Module) to connect the Raspberry Pi to the Relay Module.

• More wires and connectors to connect the server to the Raspberry Pi cleanly, without having a long cord permanently connected to the server. I used scrap Molex 4-pin connectors: I cut a similar connector in half and used one part as a permanent connector on the server, and the other part went on the wire that goes to the Relay Module.

• Finally, you need some expertise in Linux and SSH access, as the operating system I use on my Raspberry Pi is Raspbian. This I can’t teach here, unfortunately, as it’s an extensive topic. Please learn how to install Raspbian and how to access the Raspberry Pi using SSH. There are tons of tutorials for that online on the Raspbian and Raspberry Pi websites that teach it extensively. If you’re using Windows on your laptop/desktop to SSH to the Raspberry Pi, you can use Putty as an SSH client. Once you’re in the terminal of your Raspberry Pi, you’re ready to go!

### How control is done using Raspberry Pi:

If you already know how to control Raspberry Pi 2 GPIO pins, you can skip this section.

On Raspberry Pi 2, there is a set of 40 pins, 26 of which are called GPIO (General Purpose Input/Output) pins. GPIO pins can be controlled from the operating system of the Raspberry Pi. I use Raspbian as the operating system of my Raspberry Pi 2, and the Python scripting language. In Raspbian, python comes pre-equipped with what’s necessary to start controlling GPIO pins very easily.

Why Python? Because it’s super-easy and very popular (it took me a few days to become very familiar with everything in that language… yes, it’s that simple). Feel free to use anything else you find convenient. However, I provide here only Python scripts.
The following is a map of these pins. And the following is a video where I used them to control my 4-channel Relay Module. And the following is the Python script I used to do that. Lines that start with a sharp (#) are comments.

Note 1: Be aware that indentation matters in Python for each line (that’s how you identify scopes in Python). If you get an indent error when you run the script, that only means that the indentation of your script is not consistent. Read a little bit about indentation in Python if my wording of the issue isn’t clear.

Note 2: You MUST run this as super-user.

#!/usr/bin/python3
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)

#The following is a function that inverts the current pin value and returns the new state
def switchPortState(portMapElement):
    GPIO.output(portMapElement[0], not portMapElement[1])
    pe = [portMapElement[0], not portMapElement[1]]
    return pe

#There's no easy way to read back the current binary state of a pin (on/off, or 1/0, or
#True/False), so I use this structure: a dictionary indexed from 0 up to the number of
#channels one wants to control (I used GPIO channels 2, 3, 5, 6). The first element of
#each entry is the GPIO port number, and the second is the assumed initial state. The
#latter inverts at each step, as in the video.
portMap = {}
portMap[0] = [2, False]
portMap[1] = [3, False]
portMap[2] = [5, False]
portMap[3] = [6, False]

for i in range(len(portMap)):
    GPIO.setup(portMap[i][0], GPIO.OUT)

while True:
    for i in range(len(portMap)):
        portMap[i] = switchPortState(portMap[i])
    time.sleep(0.5)

If you access your Raspberry Pi using SSH, then you can use “nano” as an easy text editor to paste this script. Say you want to call the script file “script.py”; then:

nano script.py

will open a text editor where you can paste this script. After you’re done, press Ctrl+X to exit and choose to save your script.
Then make this script executable (a linux thing), using:

chmod +x script.py

then run the script using

sudo ./script.py

This will start the script, and the leds will flash every half a second. Again, we’re using “sudo” because we can only control the Raspberry Pi’s GPIO pins as super-user. There are ways to avoid typing your password each time you want to run this, which will be explained later.

### Get a grasp on the concept of turning the computer on/off:

There are two ways to turn your computer on/off electronically without using the switch and without depending on the bios (LAN wake-up, etc…):

1. If you’re lucky, the power button’s wires will be exposed, and you can make a new connection branch in the middle and lead it outside the computer. Shorting the wires is equivalent to pressing the power button.
2. Use the power supply’s motherboard green wire. Shorting this wire to ground (to any black wire) will jump the computer and start it. The following is a random picture of a computer power supply; a clip is used to short green with ground.

I used the first of the two ways I mentioned. Here’s a video showing how it looks:

So shorting these two wires that come from the power button for some time (half a second) is what I did, and that works as the equivalent of pressing the power button. Once you’ve managed to connect these, you can go to the next step.

### Connecting the power-wires to the Relay Module:

After learning how to control the Relay Module, and how to take a branch from the computer case such that shorting it starts the computer, the remaining part is to take the power-wires, which you got from your computer’s power button or from the green+black power supply cords, and connect them to the Relay Module. The following video shows the concept and the result.

Now that you have the two terminals that start the computer when shorted together, let’s get into a little more detail.
Important: One important thing to keep in mind when doing the wire connection to the Relay Module is that we need to connect the wires in a way that does not trigger the power switch if the Raspberry Pi is restarted. Therefore, choose the terminal connections that are disconnected by default, as the following picture shows. Connect the two terminals of your power-wires to either of the two marked positions in the picture. The way the Relay Module works is that when it switches, it toggles whether the middle terminal is connected to the left or the right one. By default it’s connected to the right, and that’s what we can see in the small schematic under the terminals.

After doing the connections properly, you can use the following script to turn your computer on:

#!/usr/bin/python3
import RPi.GPIO as GPIO
import time
import argparse

#initialize GPIO pins
GPIO.setmode(GPIO.BCM)

#use a command line argument parser to decide whether the switching should be long or short.
#The default port I use here is 6. You can change it to whatever you're using to control your computer.
parser = argparse.ArgumentParser()
parser.add_argument("-port", "--portnumber", dest = "port", default = "6", help="Port number on GPIO of Raspberry Pi")
#This option can be either long or short. Short is for normal computer turning on and off, and long is for when the computer froze.
parser.add_argument("-len", "--len", dest = "length", default = "short", help = "Length of the switching, long or short")
args = parser.parse_args()

#initialize the port that you'll use
GPIO.setup(int(args.port), GPIO.OUT)

#switch the relay state, wait some time (long or short), then switch it back.
#This acts like pressing the switch button.
GPIO.output(int(args.port), False)
if args.length == "long":
    time.sleep(8)
elif args.length == "short":
    time.sleep(0.5)
else:
    print("Error: parameter -len can be only long or short")
GPIO.output(int(args.port), True)

Save this script as, say, “switch.py”, and make it executable as we did before:

chmod +x switch.py

Now you can test running this script, and it is supposed to start your computer!

sudo ./switch.py

### Running the program from a web browser

You could already be satisfied with switching the computer remotely using ssh, but I made the process a little fancier using a php webpage.

##### CAVEAT: Here, I explain how you could get the job done to have a webpage that turns your computer on/off. I don’t focus on security. Be careful not to make your home/work network’s components accessible to the public. Please consult an expert to verify that what you’re doing is acceptable and does not create a security threat for others’ data.

###### Using superuser’s sudo without having to put in the password every time

In order to do this from the web, you have to be able to change pin states without having to enter the password. To do this, run the command

sudo visudo

This will open a text editor. If your username on the Raspberry Pi’s linux is myuser, then add the following lines in that file:

www-data ALL=(myuser) NOPASSWD: ALL
myuser ALL=(ALL) NOPASSWD: ALL

This will allow the Apache user to execute the sudo command as you, and you have absolute super-user power. Now notice that this is not the best solution from a security point of view, but it just works. The best solution is to allow the user www-data to run only specific commands as root: replace the last “ALL” of www-data with a comma-separated list of the commands you want to allow www-data to run, and replace “myuser” between the parentheses with “root”. I recommend you do that after having succeeded, to minimize the possible mistakes you could make.
This is a legitimate development technique: we start with something less perfect, test it, then perfect it one piece at a time.

###### Installing Apache web-server

First, install the web-server on your Raspberry Pi. Do this by running this set of commands in your terminal:

sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install libapache2-mod-php5
sudo a2enmod php5
sudo a2enmod alias
sudo service apache2 restart

I hope I haven’t forgotten any necessary components, but there are many tutorials and forums out there discussing how to start an apache webserver. If the apache installation is a success, you can go to your web-browser and see whether it’s working. First, get the hostname of your Raspberry Pi by running this command in the Raspberry Pi’s terminal:

hostname

Let’s say your hostname is “myhostname”. Now go to your browser, and enter this address:

http://myhostname/

If this gives you a webpage, then the web-server is working fine and you can proceed. Otherwise, if the browser gives an error, you have to debug your web-server and get it working. Please consult a tutorial online to help you run the apache server.

###### Creating the webpage:

The default directory where the main webpage is stored in apache is either “/var/www/” or “/var/www/html/”. Check where the index.html that you saw is, and place the new php file there. Say that php file has the name “control.php”, and say the default directory is “/var/www/”. Then go to that directory using

cd /var/www/

Now create the new php page using the command

sudo nano control.php

And use the script

<!DOCTYPE html>
<html>
<head>
<title>Control page</title>
</head>
<body>
<form action="#" method="post">
<center>
<select name="switchlen">
<option value="short">Short</option>
<option value="long">Long</option>
</select>
<input type="submit" value="Switch server" name="submit">
</center>
</form>
<?php
if(isset($_POST['submit']))
{
    if($_POST['switchlen'] == "long")
    {
        echo("This is long");
        echo("<br>");
        $command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len long > debug.log 2>&1";
    }
    else if($_POST['switchlen'] == "short")
    {
        echo("This is short");
        echo("<br>");
        $command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len short > debug.log 2>&1";
    }
    $output = shell_exec($command);
    echo("Script return: ");

### What if corporate policy prevents me from storing literature on a 3rd party server?

Actually, it’s not only corporate policy that prevents me from doing that. What also prevents me is that I’m not convinced such a service is worth $10 per month, especially since I own a linux server, for which I pay $30 per month with 1 TB of diskspace. Does it make sense to pay $30 for a full-featured linux server and $10 just for literature? Not at all!

### Proposing a solution to avoid 3rd party cloud intervention

The solution is very simple, and I could implement it thanks to the nice way Zotero stores data. Since Zotero stores its data in a single, well-defined folder, all you have to do is synchronize this folder among the computers you use! One method is a repository system like GIT, which I find inconvenient, since I’d have to manually commit, push and pull every change. The better method I found is a synchronization system driven from my $30 server, called Seafile.

Seafile is an opensource cloud system (similar to Dropbox) that can be run from your own server! It uses client-side encryption and is the safest and most recommended system I know of, so far. I have been using it for all my work and data, and I find it very convenient. So, all you have to do is synchronize your Zotero data folder among the PCs you want to use.

If you don’t have a server of your own, simply use some 3rd party cloud, like Dropbox, which will anyway give you more diskspace than the standard Zotero cloud offers. However, you’re again limited by diskspace eventually. In case you need more diskspace, I really recommend renting (or perhaps buying, for your home) your own linux server. You learn a lot, you save a lot of money, and you can use it for multiple purposes for yourself and your family.

Or… you could use servers from your own institution. Good universities normally offer free diskspace for employees and students that is globally accessible, or at least reachable through a VPN service.

### Risks?

There’s some risk in doing this, but it’s not that bad, for a reason. The main risk of this method is that you could open the same Zotero database from different computers. I’m not sure whether merely opening it from different computers would create a problem, but I’m almost certain that if you make changes on different computers simultaneously, you’ll induce a problem when your cloud tries to merge the databases. However, it’s not that bad, because cloud systems usually keep a full history of your files, with a revision for every change you make. This means that if your database files (files with the extension *.sqlite in Zotero) get corrupted, you can always roll back to a previous version and have zero losses.

### Conclusion

You can create a very good and reliable scientific literature database system using Zotero and a cloud. This is a perfect solution for personal literature. However, I still don’t have a solution for groups that doesn’t involve storing data on a 3rd party server.

PS: It could be possible to use your own server to synchronize the Zotero database as if you were synchronizing with the official Zotero server. However, this would involve recompiling the source code of Zotero with your server address, which, I think, is a huge burden. It depends on whether your group wants to be committed to such a solution.

# What is exactly non-deterministic in our universe?

#### What is determinism?

Determinism is the concept that the physical world we live in is wound like a clock: if we knew every law that governs the universe, and we had the computational power to compute those laws, then we could know the future with 100% certainty.

This universe is non-deterministic. We now know this with great certainty, through many experiments and many successful models that have, so far, shown it to be true. In fact, Einstein fought for the last 20 years before his death to disprove this, and he failed.

Will someone else disprove it? A Nobel prize awaits that genius, if he can do it!

#### Main question of the article

What if an observer lived outside our universe, monitored it from the outside, and knew the internal parameters that control it: would he be able to determine the future with infinite precision?

It’s in fact a very complicated question to think about. However, in order to answer it, we have to understand what is non-deterministic in our universe.

#### How can we tackle this question?

In order to answer this question, we have to understand what it is that we have to predict, and why uncertainty shows up in the first place.

#### Where does uncertainty come from?

Uncertainty comes mainly from the fact that we deal with a world governed by classical parameters, for example energy and position. Those parameters, if known very well, describe our systems very accurately.

In other words, in the simplest form, if we know the positions and energies of a set of particles, we can predict the future dynamics of the system very accurately.
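To make this concrete, here is a standard textbook sketch (my illustration, not from the original text) of what classical determinism means: Newton’s second law plus the initial state fixes the whole trajectory.

```latex
% Newton's second law for a particle of mass m under a known force F:
\[
  m\,\frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = F(x,t),
  \qquad x(t_0) = x_0, \quad \dot{x}(t_0) = v_0 .
\]
% For well-behaved F this initial-value problem has a unique solution x(t),
% so the entire future of the system is determined by the present state.
```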

#### What is it that we’re uncertain about in our universe?

Here comes the problem. When we go to the microscopic level of particles, we find that the description of systems is very different from the macroscopic (large scale) one. In quantum mechanics (QM), systems are described by so-called “wave packets”. They’re no longer “objects” as we thought of them before.

#### The problem is the transition

Our particles are described by wave packets at the microscopic level. But in our world, we don’t deal with wave packets; we deal with particles, with well defined energies and well defined positions. Therefore, we need a transformation that takes our wave packet from its wavy picture to a picture compatible with our classical observations. In QM, this transformation is called the expectation value. There is no way to perform this transition without uncertainty. For example, for the particle seen in the picture, we can never, ever define a single point that characterizes its position. There is no position! The particle is smeared over a volume of space. Therefore, the transformation from the QM picture, with its wavy properties, to the classical picture is the cause of uncertainties.
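As an illustration of that transformation (these are standard QM formulas, my addition rather than the article’s): the expectation value extracts a single classical number from the wave packet, and the residual spread around it is exactly the uncertainty.

```latex
% Expectation value of position for a normalized wave packet \psi(x):
\[
  \langle x \rangle = \int_{-\infty}^{\infty} \psi^*(x)\, x\, \psi(x)\, \mathrm{d}x
\]
% The spread around that single number never vanishes for a wave packet,
% and together with momentum it obeys the uncertainty relation:
\[
  \Delta x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2},
  \qquad \Delta x\, \Delta p \;\ge\; \frac{\hbar}{2}.
\]
```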

#### Conclusion

The reason for non-determinism in this world is not that we don’t know the characteristics of a particle in its wave nature. The main cause of uncertainty is that the microscopic description of particles always comes with uncertainties, due to the incompatibility of the two world views when taking the step from the microscopic world to the classical world.

Answering the main question: will someone outside the universe, who knows the parameters of those wave packets, be able to predict the future with 100% certainty? The answer is NO. The uncertainties in the transformation from the wavy form to the classical form are not a property of our physical world; they come from the mathematical nature of the transformation itself, which inherently causes uncertainties to appear, independent of the knowledge of the entity performing it.

# Physics models nature, it doesn’t find its laws

#### One huge misconception about physics is that it seeks laws presumed to exist in nature

No! Physics does not presume that nature has laws and try to find them. Physics simply studies a phenomenon, and then tries to create a law that is accurate enough to reproduce the phenomenon, or at least to predict its occurrence in the future.

#### Don’t those two sound the same?

They are very different! Under the assumption that nature contains laws that we try to find, we assume that the laws we find are 100% accurate. Not only that, but we also assume that the laws of physics represent the system at its roots. Both assumptions are untrue!

#### Why is this wrong?

Because the laws of physics that we create depend solely on our observations of those phenomena, and our observations are, without a doubt, merely a projection of reality, not reality itself.

#### Have there been incidents showing that this is the case?

Yes! Throughout the history of physics, we have always seen that the laws we discover are simply a superset of older laws. For example, take a look at Newtonian Mechanics (NM) and Quantum Mechanics (QM). In NM, we created a physical quantity called “Energy”, and this energy played the main role in everything in classical physics, from simple motion through Lagrangian and Hamiltonian mechanics to the fundamental laws of thermodynamics. However, in QM, we found that energy, which we thought was fundamental, is not fundamental anymore! Not only that, but we also found that positions are not fundamental either, and those characteristics that we used in classical mechanics as absolutes no longer work in QM, not absolutely. Consequently, uncertainty principles showed up for position and energy.