story.mdwn

This page tells the story of Partager. How, when and why it started. It also tells parts of my own story. Actually, it may end up being more “my story” than “Partager’s story”. We’ll see.

For a short overview of when and how the project started, jump to the last section.

Note: The story mentions several proprietary software programs, some of them in a possibly positive light. However, this is just the story the way it happened, nothing more. I don’t suggest or recommend this software, or any other proprietary software, in any way. Today I use a 100% free operating system, with no proprietary packages, kernel blobs or anything like that.

Ancient Times

The first computer I had access to ran Windows 3.11 and had a drive for those large floppy disks which preceded the 3.5 inch ones still used as the “Save” button icon in many programs. It took a while until I found one of those disks, and the drive B mystery was finally solved: now it made sense why A and C were in use while B seemed to be forgotten. There was no internet connection, so the world was small.

I wasn’t using the computer more than once every week or two back then. Most of my hobby activity, other than reading and listening to the radio, was with my best friends: notebook and pencil. I wrote stories, drew comics and made board games. None of it was impressive or useful, probably, but I had fun.

In 2001, when people were switching from Windows 98 to the then-new Windows XP, the old PC was replaced with a newer one running Windows 98. I discovered more modern video games and the internet. But I don’t remember being excited about the internet or using it much - I just played online games when the ones coming from the CDs became boring or repetitive.

In 2003 something happened. Maybe nothing would have changed without it, but maybe that event was what led me to discover programming and be where I am now. In any case, it is definitely how I found out that programming exists.

Even before that, I wondered how computers work. How a piece of metal can deliver such complicated images, make sounds, respond to mouse movements and print documents. But now I had a clue - just a tiny one, which I didn’t think had anything to do with how “real” programs are written - and without knowing it, I was already picking up some basic skills. It was game scripting.

I think it was a birthday present. A game CD, “SimCity 3000 Unlimited”. The “Unlimited” part is important, if I recall correctly, because it means the game comes with several extensions, among them the Scenario Creator. It allows you to build a story around the game, generate events and give the players tasks and missions, and then punish or reward them depending on how well they play. The thing is, scenarios are created using a dedicated scripting environment.

Either because I wasn’t using the computer very much, or because notebook and pencil were still my best friends (or both), I started writing scenario scripts in a notebook. After writing some on the computer, I felt I had the idea and could now plan more complicated scripts, ones which required some plans and notes before I actually wrote the script. When I wrote those scripts I felt something I had never felt before - I was taking tools and building big things from them. Things I couldn’t build with pencil and paper. Those script lines were powerful, and I enjoyed the mind-challenging process of assembling them into blocks that made sense and did what I wanted them to do. I was programming.

Middle Ages

At some point between 2004 and 2006 I decided I wanted to make my own games. I found a freeware program called GameMaker, which was very friendly to non-programmers and provided a convenient GUI for creating small, simple 2D games. I read the manual first, and then started playing with it.

Over the few years of using it I created many small programs. A 2D ghost shooter, rope physics simulation, procedural graphics (randomly-growing trees), some visual effects, a top-down shooter/exploration game “framework” I never finished, a SimCity-inspired isometric map editor and generator, and more. My last project with GameMaker was a Real Time Strategy game based on Warcraft 2. I read about pathfinding algorithms and added my own improvements, wrote the enemy AI, read about OpenGL for graphical effects like “fog of war” and map discovery, planned the underlying information and command system and did all the performance tweaks I could.

But nothing helped; the game ran too slowly in GameMaker. I was already using GameMaker’s scripting language and every tool I could short of external native code, but the game was just too heavy. All the layers of abstraction and all the relatively heavy computation (the stencil buffer and A* pathfinding were the main bottlenecks) made the game engine too slow. It started at less than 30 frames per second and dropped below 10 when I began adding game units to the map.

I knew many people were writing plugins for their games with other tools. Real programming languages. OpenGL calls. Things I couldn’t do. I decided it was time to give up the convenience of GameMaker and learn a real programming language, so that I could make the game perform much better and add more features.

Renaissance

I think it was in 2007 or 2008.

I started reading about programming languages. What they are, how they work. I had no idea which languages existed other than HTML and C, and even then C was just a name. It was time to discover new tools and expand my horizons.

I was looking for a fast and powerful language. I had no idea these advantages come at a high price (complexity and low-level work), so to me more powerful and more efficient simply meant better. I read about C and C++, and picked C++ because it supported object oriented programming (OOP), which sounded somewhat like the object model I had in GameMaker. I didn’t know C had structures, because in GameMaker’s scripting language the only way I knew to define a structure was grouping variables and/or arrays under a name, specified in a comment. So I was sure all imperative languages were like that - just variables and arrays. This made C++’s classes and objects a big advantage relative to what I thought C was.

https://www.cplusplus.com and https://www.learncpp.com helped me a lot. I read whole tutorials thoroughly, several times. I didn’t know C++ was a relatively complicated language with many small details, because I had no other languages to compare it with.

I also read about the OpenGL C interface, matrix multiplication and other tools I felt I needed. As I learned more and more, I realized how complicated it would be to write the full RTS game in C++ using OpenGL. I decided I needed a framework which would more or less simulate what I had in GameMaker and provide a high-level abstraction for things like 2D graphics, sound, animation, physics simulation and artificial intelligence. I found many tools which already implemented these things, but I felt I couldn’t just use them without understanding how they work. I wanted to know exactly how the stack works from bottom to top, even if it meant writing my own tools for programming experience and practice.

So I started.

Enlightenment

I called the project GSTC, Game and Simulation Tool Collection. It had several components: data structures, math and graphics were the core, and other things were either higher-level wrappers or less critical at that point, e.g. sound. I focused on wrapping OpenGL calls with C++ classes and 2D operations, abstracting away the details of the unused third dimension.
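
To give an idea of what such a wrapper means, here is a minimal sketch of the approach - illustrative only, not the actual GSTC code, and assuming the legacy fixed-function OpenGL API of that era - where a C++ class exposes 2D drawing while the third dimension stays hidden inside:

```cpp
// Minimal sketch (not the actual GSTC code): a C++ class hiding legacy
// fixed-function OpenGL calls behind a purely 2D interface. Assumes an
// OpenGL context has already been created by the windowing layer.
#include <GL/gl.h>

class Canvas2D {
public:
    // Draw a filled axis-aligned rectangle. Callers never see the third
    // dimension: glVertex2f implicitly places every vertex at z = 0.
    void fillRect(float x, float y, float w, float h,
                  float r, float g, float b)
    {
        glColor3f(r, g, b);
        glBegin(GL_QUADS);
        glVertex2f(x, y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x, y + h);
        glEnd();
    }
};
```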

In 2010, if I recall correctly, I bought a laptop. The only one I ever bought, and it still serves me today. My brother introduced me to GNU/Linux, and a short while after I got that laptop - or was it immediately, I don’t remember - I removed the Window$ installation it came with and installed Arch Linux instead. Today I know it was probably a very bad idea for a beginner, but it actually let me see what bleeding edge software looks like and how some things can be controlled only from the terminal - a concept which was totally new to me.

Sometimes the X server stopped working after an update, in which case I needed help to revive it. Eventually I decided to switch to something more stable, but still with recent software: Fedora. I think Fedora 12 or 13 was the latest when I switched. I still have the installation CDs. I didn’t know it back then, but both Arch and Fedora came with proprietary binary blobs for common devices which didn’t have free drivers. I also thought all those Nvidia drivers and codecs I installed manually were needed. Only much later did I discover that wasn’t the case.

GSTC was the first time I was working on something that big and complicated, involving so much research in parallel to the actual work. I had to find a way to organize all the information. I had so many lists and plans and tasks that a simple list in a text file wasn’t good enough. I started writing structured information as tab-indented trees in plain text files, using some special characters to mark the status of tree nodes. I had several files, and items moved between them as things changed and features were added.
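
For illustration, such a file might have looked something like this (the node names and status markers here are made up for the example, not the exact conventions):

```text
GSTC
	+ core
		+ data structures
		* math
			* matrix operations
			- quaternions
	- graphics
		- OpenGL wrapper classes
	? sound (postponed)

legend:  + done   * in progress   - planned   ? undecided
```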

For a while, it worked. But those trees were becoming huge, much too big to manage with a text editor, even a powerful one like Gedit. I started looking for task management tools and hierarchical note applications. I made a list of them and started trying them one by one. Each tool provided features I didn’t have before, but something was always missing. I think Zim was the tool I used for the longest time, but eventually it wasn’t what I really needed. As time passed and I was spending less time on GSTC and more on the task applications, I began to lose motivation. Why reinvent the wheel? I felt there was no good reason.

I opened a new page in my Zim notebook and started making a list of ideas for new projects I could start. After considering all the options, I decided to make a platform-adventure game. I had an idea for a story - not very original - and I wanted to create a cool game engine with good physics simulation and automation for dialogs. This time I was going to use existing tools and avoid duplication.

After Zim I tried one last task management tool, Getting Things GNOME (GTG). I liked its task hierarchy and tagging, but it had two weaknesses: I failed to give tasks and tags more than one parent, and when my task tree contained several hundred tasks, GTG became extremely slow and unusable. It took a very long time just to load the application. Of course, maybe that has been fixed in later versions.

Then something happened. I don’t remember what exactly. I remember my life was unstable and I was asking myself existential questions, but I don’t remember how it affected my work. Anyway, at some point, after a long while of not touching my work, I decided to stop working on the game and start something new. I had to solve the task management problems.

Sylva was born.

Revolution

What I was missing the most in all the tools I tried was the ability to treat the task dependency graph as a Directed Acyclic Graph (DAG), not as a tree, and without any tree-like abstractions on top. Sylva was supposed to be the task management application that would solve the problem.

The tools I chose were C++ and gtkmm. I knew GTG was written in Python, and hoped that whatever performance problems Python caused wouldn’t exist if I used C++, which was fast and the only language I knew. I learned gtkmm quickly and started coding the data model and the GUI. I created derived TreeModel and TreeView classes and dived into the GTK+ source code, trying to understand how the TreeModel drag-n-drop implementation works, so I could make it behave the way I wanted. I used a complicated graph node class to represent a DAG vertex, which took me a very long time to write. I didn’t know anything about graph theory (they didn’t teach that in high school), so I just used my own graph tools and assumed it would work.

But it wasn’t so easy. While the GUI part was successful, the Node class and its integration with the Task and Tag classes became more and more complicated, and the usage of C++ templates and specializations only made it (much) more complicated. I knew DAGs were general purpose, and that I should deal with them in a general-purpose way. But the more I tried, the more I realized something was wrong there.

Actually, maybe I was wrong. Maybe I was fine. The problem was that every time a dependency was created between tasks, my code would remove all the existing dependencies now implied by transitivity, to keep the graph clean and free of duplication. That operation had a high cost - scanning the whole graph. Today I know BFS and DFS are standard operations, and on any human-created task graph they wouldn’t take much time anyway, but back then I was very worried, because I didn’t want to introduce inefficiency at such a low level of the application.
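
To make the operation concrete, here is a minimal sketch of that transitive-reduction step, assuming a simple adjacency-set representation. The names (Graph, reachable, addDependency) are illustrative, not the actual Sylva code:

```cpp
// Illustrative sketch, not the original Sylva code: after adding a
// dependency, drop every direct edge that is now implied by a longer
// path. For a DAG this yields the unique transitive reduction.
#include <cstdio>
#include <map>
#include <set>
#include <vector>

using Node = int;
using Graph = std::map<Node, std::set<Node>>; // node -> direct dependencies

// Is `target` reachable from `from`? When `skipDirect` is set, the
// direct edge from -> target is ignored, so the function answers:
// "is that direct edge redundant?"
static bool reachable(const Graph &g, Node from, Node target, bool skipDirect)
{
    std::vector<Node> stack{from};
    std::set<Node> seen{from};
    while (!stack.empty()) {
        Node n = stack.back();
        stack.pop_back();
        auto it = g.find(n);
        if (it == g.end())
            continue;
        for (Node next : it->second) {
            if (skipDirect && n == from && next == target)
                continue; // skip the edge being tested
            if (next == target)
                return true;
            if (seen.insert(next).second)
                stack.push_back(next);
        }
    }
    return false;
}

// Add u -> v, then scan the whole graph for edges made redundant by
// transitivity - the costly full scan described above.
void addDependency(Graph &g, Node u, Node v)
{
    g[u].insert(v);
    for (auto &entry : g) {
        std::vector<Node> redundant;
        for (Node b : entry.second)
            if (reachable(g, entry.first, b, /*skipDirect=*/true))
                redundant.push_back(b);
        for (Node b : redundant)
            entry.second.erase(b);
    }
}

int main()
{
    Graph g;
    addDependency(g, 1, 2);
    addDependency(g, 2, 3);
    addDependency(g, 1, 3); // implied by 1 -> 2 -> 3, removed again
    for (const auto &e : g)
        for (Node b : e.second)
            std::printf("%d -> %d\n", e.first, b); // prints 1 -> 2, 2 -> 3
}
```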

I decided to split Sylva into several components, so the DAG part could be a separate library for general-purpose use.

The git repository containing these components can be found on this website’s git server. I still plan to use the code, especially the gtkmm specialized widgets, although I may decide to rewrite them in a higher-level language. SGP now has newer versions in its own repository.

Modern Era

At some point, because of all the complications with the Arbre::Node class, I began to wonder whether it would be a better idea to have a more general-purpose model. Something that could model anything, not just a task DAG. What if I wanted to have a pair of tasks depending on each other? A DAG can’t express that.

Most of all, the problem was that while developing those components, I wanted to be able to define task graphs in some temporary way, using text files I could later process and convert to other formats. After examining several text-based task management tools I didn’t like, I started looking for a general-purpose information language. I found the Resource Description Framework (RDF). It almost sounded like a miracle. A tool which can model anything! Describe anything! Define anything! I didn’t really know how it was based on math, set theory, graphs and so on, but I knew it could describe any data model I wanted. Including DAGs.

My first idea was to use RDF directly, but I quickly realized I didn’t understand how it works. The documents were very confusing and technical. I failed to grasp it. Just like with GSTC, I decided I’d create my own tool.

Using a hierarchical note application called CherryTree, I started planning a data language which I called Idan - my girlfriend’s name. I was too confident, feeling it was going to be the perfect language. I didn’t realize that not understanding the concepts of RDF was an obstacle. But I read more and studied harder, and eventually I understood. By that time there was already a huge plan around Idan: a language, an API to manipulate data and a plan to implement my task application using it.

However, there was a problem: once I understood RDF, I began to realize Idan was too weak, much less powerful than RDF. It would have to have more features and more syntax constructs - but that would just make it an RDF clone. I had to change something in the plan, or cancel it. By that time, I already had several software repositories and API plans. There isn’t much useful code, but I uploaded it anyway to Partager’s git server.

After I stopped feeling lost and was able to make new plans with a clear mind, in 2012-2013 I started migrating from CherryTree to a folder-hierarchy-based structure with plain text files. I called it “the wiki”, although it was just files and folders. The idea was to drop the old, limited Idan plans and either create a better language or reuse RDF. Before deciding, I had to examine the options carefully.

I read about RDF’s development process. You’d have to be either an organization or an invited expert in order to take part in developing it. There was no way I could ever be there without at least having professional knowledge and knowing what “semantic web” means. I decided, just like with GSTC and Sylva, that I wanted to do it myself, bottom-up. The whole process. Design a tool the way I believe it should be.

I also read about other languages, like Gellish, and as I read I found weaknesses and flaws in RDF which I wanted to fix. Most of them came from RDF’s origin and its design as a web language rather than a general purpose tool, but some were just technical details. Resource URIs usually used website URLs with centrally-issued domains, there were no statement identifiers without reification, and ontologies were developed by experts in working groups, so the average person could never just open an editor and define models on the fly. RDF seemed to be, and still seems to be, mostly a tool for servers to work with more sophisticated information. Desktop and mobile users hardly have access to this power.

To be honest, I also wanted to have my own project. Design things from scratch. I didn’t like the idea of using RDF as a solution to a problem so close to my “heart” by now - data modeling - especially because a group of selfish-interest companies was working on it and I couldn’t be involved, not even just for learning and developing my skills. I was just a kid with a laptop, after all.

While using Fedora, I realized most distributions ship proprietary kernel blobs or optionally install proprietary drivers and codecs for you. Only very few distributions are really 100% free software. I decided to try running one of them on my laptop and see if it could work without any blobs.

It was shortly after Debian 7 was released. I chose Debian 7 stable with only the main repository enabled; “contrib” and “non-free” were disabled. I was lucky: everything worked without proprietary components. Wifi, bluetooth, graphics. Since then I’ve been using Debian.

For a long time I used GNOME 3. But I began to dislike the movement toward the over-simplified, mobile-oriented UI of GNOME Shell at the cost of effectiveness for desktop usage (I personally began to feel the desktop metaphor was still better for desktop use). I decided I had to switch to something else, or remain with GNOME 3.4 forever. What also worried me was GNOME’s apparent drift away from the principles of software freedom (like the introduction of officially-unofficial GitHub mirrors) and the feeling that development was driven too much by companies, with too little community involvement.

So I switched to XFCE. This is what I’m using at the time of writing. And it’s great! The whole desktop appearance is very configurable (while GNOME Shell was hardly configurable at all unless you developed extensions, except maybe for the desktop background) and provides the old but excellent desktop metaphor. I feel at home much more than I used to. And as a big bonus, RAM usage is considerably lower now. GNOME 3 could easily use up all my RAM; now I’m normally below 50% usage, which lets me run heavy things in the background and add more server daemons without worrying about it. It’s light and fun.

I do still use several GNOME applications, especially ones which didn’t get their UI over-simplified in GNOME 3.4 or don’t have good alternatives in XFCE. Most notably I use Gedit and Nautilus.

Some of you may think it’s unfair to judge GNOME by version 3.4 because it was young, and that I should try newer versions. Well, I did. I tried Fedora 19 for a while, with all the new applications and the improved shell interface. I also tried the Fedora 20 beta when it was released. I still prefer XFCE :-)

Contemporary Era

I think the most important change which led to the birth of the Partager project in 2013-2014 was… my spiritual growth. I became more aware of issues of freedom, privacy and basic rights. I started asking why money has to stand between people and make them go against each other, and why greedy companies get to develop software standards which everyone else, including Free software, ends up using.

Partager isn’t just a new data model - it has a spirit. It’s free as in freedom, developed without any money involved or any external influence. Its sole purpose is to provide a data management tool and framework allowing people everywhere, at any time, to access and share information without asking permission from selfish-interest organizations.

Science and progress should lead to all people’s happiness.

For several months, the text-file-based wiki sat in a git repository called rdd-wiki, available on the git server. At some point, as part of my growing interest in decentralization and home/community servers, I launched a web server available through I2P, and later also Tor, serving a simple HTML file describing my project. I quickly added a git server and Gitweb support, and for the first time Partager was online on an independent server.

Later I decided rdd-wiki wasn’t good enough, because there was no generation of interlinked pages from the plain text. I read a lot about wikis and project software, and chose ikiwiki to manage the website. The single simple webpage was replaced with a wiki; I quickly organized it, imported most of rdd-wiki into it, and added a lot of new content.

Then I found OpenNIC, the democratic community DNS. It means you can have a domain name without paying some selfish-interest greedy company or using a centralized domain name system. Instead, it’s a community and everyone has a voice. You donate to keep the servers running, not to make someone a millionaire. I got an OpenNIC domain name for Partager and opened the web server to the open internet (clearnet), in addition to I2P and Tor.

Partager now has a clear vision and plans, a roadmap, a list of components and layers, stated goals and many, many new designs being made all the time. Unfortunately a lot of my time is spent on a job-just-for-money and on software engineering studies, but I try hard to find time to clear my mind, focus and work on Partager.

That’s all, that’s the story. Probably not very inspiring, maybe a bit pathetic in some points, but everyone and everything has a history.

And a future.

Sincerely,
[[fr33domlover]]
