Tool Reuse

2014-03-26

During the last few days I’ve been examining existing RDF technologies, and I’m realizing it’s a waste of time and effort to reinvent so much when all the tools I need already exist. The fact that I’m using C++ doesn’t help, because many tools don’t have C++ bindings, which makes it look as if they don’t exist.

Here is more or less the stack for basic triplestore interaction:

    Desktop Application
        Object Mapper         Web UI and Web Pages
        Python Binding        Web Query Server
                  Model API                           Database CLI
                              Database API
                              Database Server
                              Storage API
                              Model

This is an example of what can be done with RDF:

    Kranti
        SuRF                  Pubby
        librdf                Redstore
                  Redland                             Redland Utilities
                              [none]
                              [none]
                              Redland Hash Storage
                              RDF

And this is what I intended to do:

    Kranti
        Alder                 [?]
        [none]                [?]
                  [statement API]                     [?]
                              [none]
                              [none]
                              [?]
                              Smaoin

But it means tons of reinvention: the whole stack needs to be rewritten, even though the model itself is nearly identical! I believe Smaoin can be implemented via RDF by making some simple rules like these:

  1. Use only datatypes which exist in Smaoin
  2. Don’t use things Smaoin explicitly avoids, like language tags for strings
  3. Use Smaoin modeling conventions and data schema

I’m going to build a long series of semantic applications now, based on all my ontologies and ideas. In general this is the infrastructure I’ll need:

Also, for desktop applications I’ll need:

However I can start with a local private database for each application and develop the rest later.

Here’s the same diagram, now with the new setup:

    Kranti
        SuRF                  Web UI and Web Pages
        librdf                Redstore/Fuseki
                  Redland                             Database CLI
                              Database API
                              Database Server
                              Storage API
                              Smaoin

One thing is missing though: the semantic data widgets! The ones I wanted to write based on Alder will have to be written based on SuRF instead, or I can just reuse existing ones if any exist. I doubt that any exist for GTK+, but I need to check.

Problem: My gtkmm knowledge becomes useless here, because I need to develop a widget for use in Python. I have the following options:

  1. Learn PyGObject and write the widget in Python using SuRF
  2. Use gtkmm and Redland, and see if I can write a Python wrapper
  3. Learn GObject and GTK+, use them with Redland, and then enjoy GTK’s introspection to make the widget usable from Python

In order to decide, I need to know exactly how SuRF works, how/whether it handles data updates, and how/whether it does any concurrent work with the database that Redland does not. If Redland can be used easily enough for the data access, then option 3 becomes very reasonable, with the hope that creating a GTK+ widget is not too painful… IDEA: If it is, I can start by making it a Python widget and consider porting it to C GTK+ later for cross-language availability.

Things to do, think, plan, learn and write:

Here’s what I want to start with:

Problem: These things take time, and as a result I keep losing the long-term view and motivation. The basic sustainability issues are by far more important than convenient software, so I can’t plan too much and dive into lots of plans and code. IDEA: Start with tiny steps in an iterative process, i.e. revise and improve all the time. Learn the minimum I need, so the first results come fast.

Status:

For large graphs, it’s obviously not convenient to use separate commands. I want to use a syntax like Turtle, but there’s a problem: since my system of mapping namespace-identifier pairs to UIDs doesn’t exist in RDF, I cannot use them. I have to use the UUIDs directly, which of course makes my files unreadable. Unless…

IDEA: Choose, manage and use namespace-identifier pairs, and simply find-and-replace them with UUIDs before inserting into the triplestore! This can possibly be done with sed, assuming sed is smart enough. Otherwise a whole new program is needed, possibly in Perl, which would make it easier.
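A minimal sketch of that idea in shell, assuming a hypothetical mapping file labels.map with one “label uri” pair per line (the file name, its format, and the input/output file names are all made up for illustration):

    # Build a sed script from the mapping file, then apply it to the
    # labeled Turtle file. Naive: assumes labels and URIs contain no
    # sed metacharacters such as '|' or '&'.
    while read label uri; do
        printf 's|%s|%s|g\n' "$label" "$uri"
    done < labels.map > labels.sed

    sed -f labels.sed smaoin-labeled.ttl > smaoin-expanded.ttl

If sed turns out to be too dumb for this, the same loop is a five-minute Perl script instead.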

Later I will define the syntax of the file which defines the namespace/identifier mapping - it can just be a Turtle file, or YAML. But before that, I’m going over the Smaoin plans and diagrams, to formalize them into syntax I can then feed into a triplestore. I intend to use the conversion idea and feed the result into rdfproc, thus creating an initial datastore I can build on and extend further.
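For the insertion step, a sketch of what I have in mind with Redland’s rdfproc (the store name ‘razom’ is made up; -n creates a fresh store):

    # parse the expanded Turtle into a new Berkeley DB hash store
    rdfproc -n razom parse file:smaoin-expanded.ttl turtle
    # dump the store back out as a sanity check
    rdfproc razom print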

Okay, I started writing smaoin.ttl.

And I’m basically done with it. But there’s work to do: I cannot just insert the file into rdfproc, because it uses namespace-label pairs, which don’t work like RDF prefixes do; Redland doesn’t know how to handle them. Before the Turtle file is inserted into anything, all references must be replaced with the UUIDs they represent. It also means Turtle output from the triplestore will be very ugly, because it will contain the plain, meaningless UUIDs.

In order to have a P2P semantic application framework, what exactly do I need? Hmmm, first there’s the whole core semantic part; the P2P can come later. It’s quite a separate thing. So, what remains? Well, since rdfproc is in place and apps can use Redland or its various bindings, all I need is to… what? I can insert information manually, but instead I can modify apps to work with the semantic store. On the other hand, that means I need to implement some sync, so many apps can use the same database. Anyway, first I need to write the ontologies and core definitions, etc. And I need translation software for my special Turtle files!

I’m still not done with Smaoin: I have all the property properties (e.g. transitive, function, reverse) to add, and probably other things - I need to go over the Smaoin plans and add everything. I’ll probably use the smaoin namespace for them all, because they interdepend on each other anyway, and a separation like rdf and rdfs will not be clear to a user. Or to me.

2014-04-09

I’m starting to go over the Smaoin documents. I also wrote about property requirements on paper.

2014-04-25

All 4 core ttl files now have the latest syntax, but I need to complete the descriptions. Then according to the task list above, I’ll proceed to writing files for the main Kiwi ontologies. In case it becomes boring, I can in parallel start studying flex (e.g. rewrite libKort using it) or prepare the C and C++ project skeletons.

Also, I’m changing the finished ‘ttl’ files to the new extension ‘idan’. This language is going to be Idan.

2014-05-06

I’m done for now with the core Smaoin Idan files. I’d like to choose the next step with the help of the plans above.

IDEA: Make a Redland-based C++ library written for Smaoin (e.g. with support for label resolution, statement identifiers, etc.). Another thought: maybe before I dive into implementing Idan, it may be a good idea to start from a C/C++ skeleton which builds, and then start with libKort files as a storage backend, because libKort will be easy to implement. I can also implement it using Bison or Flex/Quex, to get experience before I use them for Idan.

2014-05-08

I want to start with the skeletons. But before that I want to improve my documentation infrastructure, and for that I decided to start using ikiwiki. I already have Redmine running with an integrated wiki, but Redmine is there mostly for experimentation, and less for actual data input which will later be a hell to export to somewhere else.

After I have it running, I’ll hopefully be able to add pages related to the C/C++ skeleton plans easily, and add links from everything to everything. Then I can have two things open in parallel at all times: The source Markdown pages, and the linked pretty HTML for browsing conveniently. A simple browser will probably be enough, e.g. maybe even Lynx if I manage to make it easy for me to use. If not, any small browser will work. Midori, for example, would be great for this.

ikiwiki will need to store several things:

There are so many ways to place those things. I’m not sure what’s best. Let’s see if anyone on the internet has good ideas…

Nothing special found. Here’s the thing: I would use /var/www or /var/wiki or /srv for this, and give proper permissions, but the root partition doesn’t have much space anyway. Instead I can create a new ‘ikiwiki’ user and let it handle things under the big /home. The only problem then is how Gitolite uses the repo.

Wait… it’s even worse. Much much worse. Making ikiwiki and gitolite work together means ikiwiki needs an SSH key and other complexities. But there are several guides on the web, I think I can handle it. I just need to document this… first here, and then in a separate guide.

First, here’s a command for creating the ‘gitolite’ user, which I found on the web:

    sudo adduser \
        --system \
        --shell /bin/bash \
        --gecos 'git version control' \
        --group \
        --disabled-password \
        --home /home/gitolite \
        gitolite

I’m going to modify it to work for ikiwiki. Here’s another command, from another website:

    sudo adduser \
        --system \
        --shell /bin/bash \
        --gecos 'git SCM user' \
        --group \
        --disabled-password \
        --home /home/git \
        git

Almost the same. Let’s see man adduser first.

The --system option means the new user will be a system user. The default shell is /bin/false, but here we change it to Bash using the --shell option. By default the new system user is placed in the nogroup group, but passing --group, as we do here, puts it instead in a new group with the same ID as the user.

--disabled-password means logins are possible, but not using a password, so e.g. SSH login may still work. --gecos seems to be some kind of description, not explained in the manual page; I’ll just give it some value. And last is --home, which sets the new home directory to create.

Here’s my ikiwiki version of this command:

    sudo adduser \
        --system \
        --shell /bin/bash \
        --gecos 'ikiwiki instance user' \
        --group \
        --disabled-password \
        --home /home/ikiwiki \
        ikiwiki

In order to allow us to have a private-source wiki, or to enable the web interface and manual changes to the srcdir, we need to give ‘ikiwiki’ a pair of SSH keys, so it is able to access the repository. The key must be passwordless, because it will be used from a script. Here’s a command I found for gitolite:

    cd ~/.ssh
    ssh-keygen -t rsa -f gitolite

Let’s see man ssh-keygen. -f specifies the name of the key file, which I believe is id_rsa by default. Let’s drop that and see what happens. Then we have -t, which is the key type; RSA is probably the best. Also, do I actually need the cd there? I’m not sure. Actually, RSA is the default! So it’s probably enough to run the command without any arguments at all. On older systems specifying RSA may make a difference, but since it’s Debian 7 here and RSA seems to be the latest default… just run it without options.

One more thing: do this as the ikiwiki user. How exactly does the su - user thing work? Oh, I see. The dash is like the --login option. So if we have many things to do as this user, it’s easier to become it, and then there’s no need for repeated sudo. Here’s my version of the command:

    su - ikiwiki
    ssh-keygen

Now let’s create a new git repository in Gitolite. What name should it have? rdd-wiki is already in use. Actually, since I’m also going to drop other things there, like the tutorials… let’s just call it wiki. With my Gitolite tutorial this should be quite easy: in the “2_config” page, follow the “c) Add a new repository” instructions.

I’m also giving read access to git-daemon and adding a description, so it gets displayed on gitweb. This way it’s clear that all the info is publicly visible. In the future, if private pages are needed, global read access may be removed. Commit, push, and there you have a new repository. Now two more steps with Gitolite: add a new user with its SSH key, and give it RW access (no need for RW+). Once again, it’s a standard procedure I already documented.

Time to create the wiki. Make sure you are ‘ikiwiki’ and use the standard procedure for now, as a beginner:

    su - ikiwiki
    ikiwiki --setup /etc/ikiwiki/auto.setup

My settings as passed to the last command:

Good. Make sure the user/email shown for ‘ikiwiki’ is okay; if not, change them according to the instructions shown. Now, if a message says that adding your wiki to the wikilist failed, change to the root user and append the line “ikiwiki” (without quotes) at the end of /etc/ikiwiki/wikilist.
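For example, as root (the path comes from the message above):

    echo ikiwiki >> /etc/ikiwiki/wikilist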

Now, let’s test the webserver. We need to be able to see the pages and to edit them using the web interface. After that you can disable the web interface editing feature if you wish. It’s a good idea to make sure it works, so if you want to add it back sometime later, you’ll know everything is set up correctly.

First, the web server must have read access to the HTML pages. For this to work, make the group of the ‘public_html’ folder ‘www-data’ instead of ‘ikiwiki’. Actually, I’m not sure that will persist when the HTML pages are regenerated. If not, just do one of these:
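The options I have in mind go along these lines (a sketch only; the setgid trick is one possibility among several, and the paths assume the setup above):

    # option 1: hand the tree to the www-data group and set the setgid
    # bit, so regenerated files inherit the group
    chgrp -R www-data /home/ikiwiki/public_html
    chmod g+s /home/ikiwiki/public_html

    # option 2: instead let the web server read via group membership
    adduser www-data ikiwiki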

Wait a minute… right now my ikiwiki files are all globally readable, like any user’s home folder is. So for now it should just work. Later we’ll see how to protect it. I can check /home/git to see if the chmodding there holds even when new files are created.

Now, I think we may need to tell the server where the files are. There are some settings in the wiki setup file itself which may need to be changed. Here’s a portion of the setup file:

    wikiname: Partager
    # contact email for wiki
    adminemail: ikiwiki@fr33dom
    # users who are wiki admins
    adminuser:
    - fr33domlover
    # users who are banned from the wiki
    banned_users: []
    # where the source of the wiki is located
    srcdir: /home/ikiwiki/Partager
    # where to build the wiki
    destdir: /home/ikiwiki/public_html/Partager
    # base url to the wiki
    url: http://fr33dom/~ikiwiki/Partager
    # url to the ikiwiki.cgi
    cgiurl: http://fr33dom/~ikiwiki/Partager/ikiwiki.cgi
    # filename of cgi wrapper to generate
    cgi_wrapper: /home/ikiwiki/public_html/Partager/ikiwiki.cgi
    # mode for cgi_wrapper (can safely be made suid)
    cgi_wrappermode: 06755

The URL may be incorrect, but I’ll update it in a moment - let’s just make the web server work first. I’m still not exactly sure what happens to the CGI if we cancel the web editing: does the webserver need to be reconfigured to use the static HTML pages?

Let’s try to copy the gitweb configuration and adapt it for ikiwiki somehow. Who knows, maybe it will work.

Oh, I see. It normally browses regular HTML pages, and uses the CGI only for editing. Or at least it seems like that. Let’s start by just pointing the webserver to the HTML. Changing the document root will be enough… and yes, it just works. Now, how do I make the CGI work? I edited the setup file to disable OpenID and changed the URLs, but I still need to tell the server about the CGI. Trying to edit the lighttpd config…

No. Not working. I have no idea why, at the moment. Maybe file permissions or something.

Listen. I don’t have time right now to start playing with this. I don’t even understand the lighttpd configuration. Let’s just disable the websetup plugin and start changing things only via git. Much easier, faster and safer. I also want to replace the CSS, unless using links or w3m is good enough with the colors and everything. We’ll see.

Next steps:

  1. Proceed with ikiwiki’s instructions for using a remote git repo (so I can use gitolite)
  2. See what’s up with the timezone… how do I make it use “last changed” dates in English? Maybe just change the ikiwiki user’s locale? The default is Hebrew, so maybe just changing to English will be enough (see the sketch after this list)
  3. Start reading and doing changes via git. First task is to “port” the previous partager.i2p index page
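For step 2, a minimal sketch of the locale idea (the locale name is an assumption, and whether the git wrapper actually inherits this needs testing, since hooks don’t run login shells):

    # as the ikiwiki user
    echo 'export LANG=en_US.UTF-8' >> ~/.profile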

The ikiwiki guide now says to add configuration to the ikiwiki user. Skipping this for now. It then says to change the ‘origin’. Sure, this has to be done. Changing it:

/home/ikiwiki/Partager/.git/config

	[remote "origin"]
		url = ssh://git@localhost:8950/wiki.git
		fetch = +refs/heads/*:refs/remotes/origin/*

Good. Now a step not mentioned there: in order to create the gitolite repo as needed, it seems it has to be cloned from the existing one. Again, use my tutorial to push the Partager.git repo into gitolite’s wiki.git. Then Partager.git is not needed anymore. Still not deleting it, however.

Now let’s write a new index file to make sure everything works.

No, it doesn’t. I need to add a hook to Gitolite or something, because my changes don’t get pushed to the srcdir unless I git-pull them into it manually. Even worse, the pull doesn’t trigger a rebuild of the HTML pages. Trying to copy a hook…

After a push I got this: “remote: cannot write to /home/ikiwiki/Partager/.ikiwiki/commitlock: Permission denied”

I can give ‘git’ access to that file, e.g. by just adding it to the ‘ikiwiki’ group, or I can try the pingee plugin as the ikiwiki tip says, although that seems to require CGI… hmmm… let’s read about pingee again and see if permissions are just an easier, nicer solution.

In order to make ‘git’ in the ‘ikiwiki’ group:

adduser git ikiwiki

Doesn’t help, still permission denied.

Trying again after changing Partager srcdir permissions to allow writes from group…

Failed with a much worse error:

    remote: Host key verification failed.
    remote: fatal: The remote end hung up unexpectedly
    remote: 'git pull --prune origin' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 218.

Found another problem: srcdir wasn’t up-to-date because I didn’t pull manually after each failed attempt. Trying to push a change again…

Again an error:

    remote: Host key verification failed.
    remote: fatal: The remote end hung up unexpectedly
    remote: 'git pull --prune origin' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 218.
    remote: failed to write /home/ikiwiki/public_html/Partager/index.html.ikiwiki-new: Permission denied

Let’s give the destdir permissions too…

Again error:

    remote: Host key verification failed.
    remote: fatal: The remote end hung up unexpectedly
    remote: 'git pull --prune origin' failed: at /usr/share/perl5/IkiWiki/Plugin/git.pm line 218.

Maybe I forgot to update srcdir this time? Trying again… No. The exact same error. That line runs ‘git’ in a child process. IDEA: Maybe I need to do that thing I skipped with the SSH config? Hey, looks like the HTML got updated! Does pulling manually do that? Probably, because the last change I did is not visible there. Let’s just try the SSH fix first. Another idea: instead of making all these changes, just give srcdir file-path-based access to the repo in gitolite, without the localhost SSH! Yeah, sounds not very safe, because the R/W/+ access probably won’t be enforced… just try the SSH config.

Another thing: maybe this setup causes the ‘git’ user to SSH to itself from the srcdir repo during the git-pull, so it may be causing the trouble as well - maybe log in as ‘git’ and ssh to itself with the ikiwiki ssh key to authorize? No, sounds strange. Another idea: make the ‘git’ user keep the wiki. A bit dirty, but it will probably solve the problem, assuming gitolite is okay with being bypassed like this.
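Since every failure includes “Host key verification failed”, the SSH fix boils down to accepting localhost’s host key once, as whichever user the wrapper’s git pull actually runs as (the port comes from the origin URL above):

    # interactive: answer 'yes' once; 'info' is gitolite's standard test command
    ssh -p 8950 git@localhost info

    # or non-interactively append the key to known_hosts
    ssh-keyscan -p 8950 localhost >> ~/.ssh/known_hosts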

UPDATE UPDATE UPDATE

I made it all work on jfm laptop. Now repeating the same here from scratch…

Steps:

I’m done with the basic setup. It just works now. I can disable the web editing and preferences for now, and start planning the C/C++ skeleton in ikiwiki, to get used to it before I do the full migration. I need to get familiar with subpages, page links, templates and so on before I migrate. Let’s start :-)

Actually, just before I start it: write a guide summarizing everything I did, and make a commit to rdd-wiki saying “document the setup process of ikiwiki with gitolite”. I also want to check whether having sub-domains on the clearnet requires a DNS server here on the laptop (it’s okay if yes; I can start with using things like fr33.indy/git etc.)

Done. Guide is written. Let’s start working on the skeleton. I’m beginning to work with the new wiki.

2014-05-11

I moved some things to the new wiki, but I haven’t started working on the actual skeletons yet. Starting now. Also, use the ‘orphans’ plugin to make a simple orphan-list page.
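For reference, the orphans plugin provides a directive for this; a minimal page body could look like the line below (the pagespec is a guess, and the leading backslash is ikiwiki’s way of escaping a directive for display):

    \[[!orphans pages="* and !recentchanges"]]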

Okay, started the skeleton and added orphans. Proceeding with skeletons: collect all the materials and move them to the wiki, and also explain my plan after examining the existing examples I have, e.g. use libKort as a basis for a C++ library skeleton. Also, explain the requirements (doxygen, etc.) and that the skeleton must build as-is. The skeleton itself should be in a separate repo! But of course provide a link to it.

2014-05-12

I moved all the skeleton material. I want to start the skeleton repo, but the current skeleton folder (moved to be under git-repos) has many, many TODOs, which I want to move into ikiwiki. So I’m working on a TODO summary page like the one on ikiwiki’s website.

Start collecting the tasks from the skeleton files. I can just bring those files here, so that the TODOs in them link to the todo page and I can still have the tasks embedded as comments in the files :-)

Then the skeleton repo can have clean, stable, working files.

2014-05-15

I started libskeleton based on libKort. Now I need to:

  1. Add gettext support - old partager can help
  2. Change the name everywhere from libKort to forms of libSkeleton

Then proceed with the help of what I wrote on paper and all the tasks and notes I collected under the skeleton page in the wiki. Add features in parallel to skeleton and to libskeleton. Also in parallel to this, import the rest of rdd-wiki into ikiwiki (start from the things linked from the components page).

I also have Skapa to work on, and the whole thing of source file template templates. I need to import all my things from the CherryTree file, remember? Yeah, there’s a lot of importing to do. IDEA: When I’m done with an initial stable skeleton package I can use to start my next task - possibly the C++-Redland-for-Smaoin - do a full import of all content from everywhere into the wiki.

Vim and Perl progress: I’m learning Vim. For now it’s just for the git commits, but I’d like to also know how to do more serious editing with it, e.g. for server config files, wiki pages and source code. I’m learning Perl and intend to use it for all the template-related things, like makeclass and Skapa. I can start by writing makeclass in Perl; it should be almost as trivial as a Hello World program, much shorter than the original C++ version. Later I can also use Perl for handling data I/O when working with the Razom triplestore. I’m still not sure how that’s going to work, but I can use the existing Perl interfaces to RDF as inspiration. When the time comes, of course.

Also, write a BNF for Idan, so I have exact rules to work with. I can also - or instead - write a PEG grammar. Check it out, see how PEG works; it may be a good idea to write a PEG grammar too. Anyway, I need a grammar for reference even before I do any actual implementation.

I think this is a good time to convert those “diagrams” at the top of this file into an actual component architecture. I hopefully know much more now than before and can make some outline of things: which tools I’ll use, how they work together and so on. I’m probably dropping the RDF reuse, because RDF doesn’t have statement identifiers or the labeling system, which would result in ugly hacks and make me regret the reuse plan. So, considering all this, make a plan and a component architecture diagram. Even partial is okay - write what I can, and put it on the wiki to serve as a real-time updating overall plan of the project.

2014-05-22

I’m trying to setup an Infinote server using this:

http://softwarebakery.com/infinote-server-with-pam

Slightly modifying it to match my needs…

Modified command I will now run to make the new user:

[[!format sh """
adduser \
    --system \
    --shell /bin/false \
    --gecos 'infinote server user' \
    --group \
    --disabled-password \
    --home /home/infinote \
    infinote
"""]]

Oh, wait a minute. Creating a certificate requires having the correct hostname, but I’m still not sure which hostname I want to have. I’ll wait a bit, until I set up backups when I get the new hard drive. I’ll need to back up the whole /etc (etckeeper can help) and all the things in /var/lib, like I2P tunnel keys and Tor hidden service keys. Also, the whole content of my home folder, the git user’s home folder and the ikiwiki user’s home folder… sounds like this backup is best done automatically.
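For when the hostname is settled, a sketch of the certificate step using infinoted’s built-in generation, assuming the flags I remember from the guide (on Debian the binary may be named infinoted-0.5, and the paths are made up):

    infinoted --create-key --create-certificate \
        -k /home/infinote/key.pem -c /home/infinote/cert.pem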

2014-05-25

Okay, I did the server setup. This may help too in the future: http://gobby.0x539.de/trac/wiki/Infinote/Infinoted.

Now I guess I need to test it - whenever I have anything to share there. Also I opened the port (6523) in the router.

2014-05-27

Infinote should work now; I created one user but haven’t tested it yet. There may be a certificate problem though: if Monkeysphere cannot handle it, maybe that’s a good reason to create my own CA, like A/I do, and sign all my certificates with it.

I also started again to import things from rdd-wiki.
