PMXBOT Log file Viewer


#mongodb logs for Friday the 30th of May, 2014

[00:02:10] <daidoji> anyone still here?
[00:26:35] <daidoji> hmmm, pdb it is then...
[01:25:28] <dgarstang> Need some help with mongo. I've enabled SSL and now two shards aren't working anymore... :(
[01:40:24] <abstrakt> what's the best way to find out how much time a given query takes
[01:40:42] <abstrakt> when I have e.g. about 40,000 records, mongo wants to page them by default inside of the mongo console
[01:41:26] <abstrakt> basically I want to know which part of the 3 seconds it takes to deliver my JSON through my API is taken by my application layer and which part is the query
[01:41:40] <abstrakt> e.g. how long does the query take vs how long does it take my application layer to render those results as JSON
[01:48:15] <abstrakt> I suspect my application layer, but I'd like to confirm that, just not sure how to tell how long the query took
[02:19:53] <wojcikstefan> hey, do you know what's the acceptable replication lag? I'm trying to figure out when a member changes its state from SECONDARY to RECOVERING (i.e. how big the replication lag has to be for that to happen).
[03:02:14] <joannac_> abstrakt: .explain()
[03:03:37] <joannac_> wbx___: um, shouldn't happen unless there was a rollback, or it's in resync
[03:10:43] <abstrakt> joannac, ahh thanks
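joannac_'s `.explain()` tip can be sketched as a mongo shell fragment (2.6-era; this assumes a running mongod, and the `records` collection and filter are hypothetical — the `millis` field is what separates server-side query time from app-layer rendering time):

```javascript
// mongo shell fragment, not standalone JS — run inside `mongo mydb`.
// Collection name and filter here are made-up examples.
var plan = db.records.find({status: "active"}).explain();
printjson(plan);     // full query plan
print(plan.millis);  // server-side execution time in ms (2.6-era field)
```

Comparing `millis` against the total request time tells you how much of the 3 seconds belongs to the application layer.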
[03:17:21] <geardev> http://www.reddit.com/r/mongodb/comments/26ji9u/how_to_get_a_handle_on_an_open_connection_without/
[03:17:41] <geardev> Last comment: "any idea why db would be undefined in this case?"
[03:42:48] <geardev> please?
[03:42:56] <geardev> "any idea why db would be undefined in this case?"
[03:43:00] <geardev> http://www.reddit.com/r/mongodb/comments/26ji9u/how_to_get_a_handle_on_an_open_connection_without/
[03:52:18] <geardev> liajslj :(
[03:52:24] <geardev> nobody ever answers
[03:52:27] <geardev> in this channel
[03:52:40] <geardev> guess i'll go cry then read the docs a little more
[04:54:50] <voxadam> I'm attempting to install mongodb on a Debian sid box but it fails to start. Does anyone have any thoughts? http://pastebin.com/WN7yqExJ
[05:27:23] <abstrakt> hmm, so I just made a super simple application with express, and I'm trying to deliver approx 50,000 records but I get no data back if I set limit to 100000
[05:27:40] <abstrakt> is there a limit to limit?
[05:27:48] <abstrakt> using the nodejs native driver
[07:00:42] <noob123> Hi! Anyone alive?
[07:01:33] <snixor> I'm trying to "repair" (shrink) my mongodb databases, and when i launch --repair with --repairPath the process ends in milliseconds
[07:01:36] <snixor> nothing changes
[07:01:46] <snixor> Any advice?
[07:02:35] <Ponyo> Is there a processor more effective at the workload mongo presents than the Intel Xeon?
[08:04:36] <amagee> hey i just upgraded my ubuntu vps and now when i start mongodb it seems to terminate straight away.. logs here http://pastebin.com/cKzpntRq .. any ideas?
[11:02:14] <Shapeshifter> Hi. I'm writing an application which does large scale graph computations. I need to store the resulting data somehow and I'm thinking I could use mongodb, but I'm not quite sure how to design the documents. Some facts: 1) Every node in the graph will carry some data which needs to be persisted. 2) The data of many nodes might be identical. 3) nodes may be added and removed from the graph, after which the computation will be re-run, ...
[11:02:20] <Shapeshifter> ... which will cause a relatively small number of nodes to have changed data, which again needs to be persisted.
[11:03:42] <Shapeshifter> I'm thinking I could have a data collection which stores data, identified by some hash. Every node could reference one of these data items (so that nodes with equal data all point to the same data document). That part is relatively clear
[11:04:52] <Shapeshifter> I'm not so sure about the versioning. Basically I would need to be able to query for "all data documents which represent data of a graph of revision XYZ", but many nodes may not have updated their data for that revision, so the data of an older revision would need to be used.
[11:05:46] <Shapeshifter> So for every node, I would need to query "is there a data document for revision n?" and if not, query "is there a data document for revision n-1?", and again n-2, n-3 etc. until I find something, but this sounds slow.
[11:11:50] <rspijker> Shapeshifter: you could just use $lte (less than or equal) in your query…
[11:12:35] <Shapeshifter> rspijker: but I would always need the newest revision. So if a node has 3 different data documents, one at rev 1, one at rev 21 and one at rev 25, I need the data from rev 25 but not the others
[11:13:17] <rspijker> well, you could do a sort and limit, to get just the latest result
[11:13:53] <Shapeshifter> rspijker: is sorting expensive? I might have some 2-10 million data documents which I need (plus a few million more of older revisions which I wouldn't need in this query)
[11:14:09] <rspijker> it can be, but you can add an index for that specific field
[11:14:14] <Shapeshifter> I see.
[11:14:15] <Shapeshifter> thanks
[11:14:17] <rspijker> which makes it fairly cheap
[11:14:47] <rspijker> think about your data structure well though….
[11:14:57] <Shapeshifter> I think maybe I could keep a "fringe", basically a map of node id to "latest data id"
[11:15:07] <rspijker> if you always want latest, it might be worth getting a wrapper document that you keep updated with latest
[11:15:13] <Shapeshifter> yep
[11:16:09] <rspijker> either way, mongo is very flexible, so you can always decide to do things slightly differently later on
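rspijker's suggestion — filter with `$lte` on the revision, sort descending, limit 1 — can be sketched without a server in plain JS over an array of hypothetical data documents (field names `node`/`rev` are assumptions, not from the original design):

```javascript
// Plain-JS sketch of "newest data document at or below revision n".
// In the mongo shell the same lookup would be roughly:
//   db.data.find({node: nodeId, rev: {$lte: n}}).sort({rev: -1}).limit(1)
// with an index on {node: 1, rev: -1} to keep the sort cheap.
function latestAtRevision(docs, nodeId, n) {
  return docs
    .filter(function (d) { return d.node === nodeId && d.rev <= n; })
    .sort(function (a, b) { return b.rev - a.rev; })[0] || null;
}

var docs = [
  {node: "A", rev: 1,  data: "old"},
  {node: "A", rev: 21, data: "mid"},
  {node: "A", rev: 25, data: "new"}
];
// Asking for revision 23 falls back to the rev-21 document.
console.log(latestAtRevision(docs, "A", 23).rev); // 21
```

This is the "fall back to the newest revision ≤ n" behaviour Shapeshifter wanted, without the n-1, n-2, … query loop.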
[11:17:38] <kali> with this kind of problem, anyway, you'll have to be creative and flexible with any database
[11:18:08] <kali> have you considered graph oriented solutions, like neo4j ?
[11:18:42] <Shapeshifter> kali: yes, I have looked into neo4j but it looked scary.
[11:19:09] <kali> graphs are scary :)
[11:19:48] <rspijker> graphs are the best
[11:20:27] <kali> mongodb does not address graph specifically, but the computing difficulties will lurk and bite you for sure. neo4j may look scary because the hard part shows earlier... i don't know :)
[11:21:22] <Shapeshifter> the thing is that I don't really need to store the graph. The graph is actually source code, an AST, plus some extra edges, but I can always recreate the graph from the source code and the graph computation framework itself doesn't provide any form of persistence, so basically I only need to store data, but not the graph structure itself, which is present in the code.
[11:21:41] <kali> ha
[11:22:41] <kali> a very treeish DAG, then
[11:22:53] <kali> nothing like a social graph
[11:23:02] <kali> that's different.
[11:23:11] <rspijker> a ‘treeish DAG’?
[11:23:25] <rspijker> a DAG is, by definition, a tree, isn’t it?
[11:23:29] <kali> rspijker: nope
[11:23:30] <Shapeshifter> The different revisions of the graph are revisions of a git repo. I'm actually thinking I might store the data right there in the git repository to get free versioning (i.e. create a branch which contains the original code plus one data file for each source code file). Something like that. But I doubt it would work nicely.
[11:23:40] <kali> rspijker: a node in a DAG can have two (or more) parents
[11:23:59] <Shapeshifter> rspijker: you can have something like >--< in a DAG, which is not a tree
[11:24:00] <kali> rspijker: http://en.wikipedia.org/wiki/Directed_acyclic_graph
[11:24:08] <rspijker> kali: the definition of tree that I recall from my days of graph theory is that it is acyclic
[11:24:10] <Shapeshifter> well, it looks like a tree
[11:24:25] <kali> rspijker: yeah. indeed
[11:24:28] <kali> my mistake
[11:24:30] <rspijker> hmmm, apparently it’s an undirected simple graph
[11:24:38] <kali> well no.
[11:24:39] <rspijker> definitions vary, I believe...
[11:24:40] <kali> i'm right
[11:24:46] <kali> you can still have two parents
[11:24:51] <rspijker> sure
[11:24:54] <kali> because it's directed
[11:25:04] <rspijker> the question is, can a directed graph be a tree
[11:25:15] <kali> a tree is a DAG, yes
[11:25:37] <rspijker> no…
[11:25:41] <rspijker> a tree is not directed
[11:26:01] <kali> a tree is directed: the edges have a parent and child
[11:26:05] <kali> they are directed
[11:26:34] <rspijker> a tree is undirected
[11:27:34] <kali> mmmmm
[11:27:35] <kali> ok.
[11:27:37] <Shapeshifter> it might depend on whether talking about trees in set theory, graph theory or as a particular implementation
[11:27:38] <Ravenheart> hey guys
[11:27:43] <Ravenheart> big huge problem
[11:27:44] <kali> http://en.wikipedia.org/wiki/Tree_(graph_theory) in graph theory it is
[11:27:47] <Ravenheart> that super annoying
[11:27:53] <rspijker> in the graph theoretical sense, a tree is just a graph without cycles
[11:28:01] <Ravenheart> i want to run my mongod in auth mode
[11:28:12] <rspijker> you can even root the tree at any vertex, so direction makes no sense
[11:28:14] <Ravenheart> but the user i make then suddenly doesn't have access to create new collections
[11:28:22] <kali> rspijker: computer trees are the "rooted trees" of graph theory, and these are directed :)
[11:28:24] <rspijker> in the computer science sense of the word, this will likely be slightly different
[11:28:29] <kali> yeah
[11:28:31] <rspijker> kali: yes :)
[11:28:51] <kali> well, we are talking about an AST, so i would expect *that* to be directed :)
[11:29:00] <Ravenheart> i've added readWrite, dbAdmin and userAdmin as roles
[11:29:10] <Ravenheart> and STILL it insists that my user does not have access
[11:29:54] <rspijker> kali: fair enough :)
[11:30:21] <rspijker> Ravenheart: does it have readWrite on the correct DB? or only on admin?
[11:30:30] <Ravenheart> on the correct db
[11:30:42] <rspijker> which version of mongod is this?
[11:30:45] <Shapeshifter> kali: interestingly, the direction doesn't really matter even for an AST. e.g. in my representation, the arrows point from children to parents, but it might just as well be the other way around
[11:30:57] <Ravenheart> 2.6.0
[11:31:09] <Shapeshifter> kali: or edges could be bidirectional
[11:31:30] <rspijker> Ravenheart: sorry, can’t help you then… They changed role mgmt in 2.6 and I haven’t worked with it yet
[11:31:49] <kali> Shapeshifter: yeah, but there is a privileged node, some kind of root, right ?
[11:32:09] <kali> Shapeshifter: the fact that you draw the arrow in one sense or the other is, i agree, quite irrelevant
[11:32:15] <rspijker> Shapeshifter: that’s probably because an AST is in fact a tree, as in, no cycles
[11:32:25] <Shapeshifter> yes
[11:33:13] <Ravenheart> god this is so irritating
[11:33:19] <Ravenheart> i've setup a whole migration process
[11:33:24] <Ravenheart> done all the hard work
[11:33:34] <Ravenheart> and the only let down is crap user management
[11:33:45] <Ravenheart> how hard can it be to add one user that has full rights to database X
[11:34:20] <Ravenheart> the only thing left is to create a user in the admin database
[11:34:29] <Ravenheart> and open up the whole world to the whole mongod
[11:35:13] <rspijker> Ravenheart: according to docs, readWrite should do the trick...
[11:36:02] <rspijker> how did you create the user?
[11:36:15] <Ravenheart> both with Robomongo and by hand
[11:36:36] <rspijker> which command?
[11:39:31] <Ravenheart> db.addUser
[11:39:48] <rspijker> that’s a 2.4 command… it’s deprecated in 2.6
[11:40:02] <Ravenheart> the createUser doesn't work
[11:40:06] <Ravenheart> undefined
[11:40:19] <rspijker> is your shell 2.6 as well?
[11:40:35] <Ravenheart> well it should be
[11:40:38] <Ravenheart> its an IDE
[11:40:41] <Ravenheart> Robomongo
[11:41:16] <rspijker> you said you created them ‘by hand’ as well...
[11:42:33] <rspijker> also: https://github.com/paralect/robomongo/issues/520
[11:43:45] <Ravenheart> " Error: couldn't add user: User and role management commands require auth data to have schema version 3 but found 1 at src/mongo/shell/db.js:1004"
[11:43:50] <Ravenheart> yes i opened the shell inside the IDE
[11:43:53] <Ravenheart> and wrote the command myself
[11:44:28] <rspijker> yeah… so that’s still on v2.4
[11:44:47] <Ravenheart> thats from the linux box
[11:45:03] <Ravenheart> i ssh into the server and used its shell
[11:45:14] <rspijker> yeah, you now have an old-style user in there (due to the addUser) and therefore it doesn’t play nicely with the 2.6 format
[11:45:54] <rspijker> remove the old user and you should be able to use the createUser command
[11:54:05] <Ravenheart> this is weird
[11:54:11] <Ravenheart> dropped the database
[11:54:19] <Ravenheart> used the server's shell
[11:54:25] <Ravenheart> and it still says the same thing
[11:54:34] <Ravenheart> schema version 3 but found 1
[11:54:39] <Ravenheart> what am i doing wrong
[12:03:48] <arussel> I need to update a collection using mongo shell script. What is the proper way to do it? var cursor = db.foo.find() while(cursor.hasNext()){ var el = cursor.next(); db.foo.update({_id: el._id}) seems to create problems
[12:27:38] <rspijker> arussel: just do a .update ?
[12:28:17] <rspijker> without the cursor that is...
[12:28:22] <rspijker> you can use multi:true
[12:32:04] <arussel> don't I have to use snapshot() ? I do see the same document more than once
[12:32:23] <arussel> I need the document to know how to update it
[12:35:13] <arussel> yeah, using snapshot solves the issue :-)
[12:35:24] <kali> arussel: yeah _id, or any index which make sense
[12:35:35] <kali> arussel: i mean, $snapshot is just a sort by _id
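arussel's working pattern — iterate with `snapshot()` and update by `_id` — as a 2.6-era mongo shell fragment (assumes a running mongod; the collection and the `$set` field are hypothetical placeholders):

```javascript
// mongo shell fragment, not standalone JS — assumes a running mongod.
// snapshot() (effectively a traversal by _id, as kali notes) keeps a
// document from being returned twice when an update moves it on disk.
var cursor = db.foo.find().snapshot();
while (cursor.hasNext()) {
  var el = cursor.next();
  db.foo.update({_id: el._id}, {$set: {migrated: true}});
}
```

When the new value doesn't depend on the existing document, a single `db.foo.update({}, {$set: {...}}, {multi: true})` avoids the cursor entirely, per rspijker.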
[12:40:56] <arussel> if I have {"_id":1, a: "a"} does db.foo.update({_id: 1}, {$set: {a: "a"}}) takes a write lock ?
[12:41:27] <kali> i think it does, but not for long :)
[12:44:46] <arussel> so it might be better to do a read before an update from the application instead of just throwing updates at mongo hoping it can manage better
[12:57:39] <kali> arussel: i don't think you'll gain much. the lock will be held only the time needed to do the comparison, so it should be very fast. if you perform a client side find, you expose yourself to a race condition
[12:58:22] <kali> arussel: don't overthink the write lock too much. it is only a problem in pathological cases
[15:54:01] <doug_> Help? :)
[15:55:35] <saml> i help doug_
[15:56:16] <saml> db.help.insert({user:'doug_', question: ????})
[15:56:22] <doug_> saml: Hi.. I have SSL questions...
[15:56:48] <saml> mongodb is good at SSL termination
[15:57:04] <ehershey> I'm good at SSL termination
[15:57:12] <doug_> saml: well that's good. So, uhm... if I enable SSL, but skip the CA verification for now...
[15:57:23] <saml> ah
[15:57:29] <saml> what's your driver? pymongo?
[15:57:45] <doug_> Well, actually I'd rather not skip it. I'm using the ruby one
[15:58:13] <doug_> can't seem to really find much in the way of docs for the ruby driver with ssl
[15:58:36] <saml> http://docs.mongodb.org/manual/reference/configuration-options/#net.ssl.weakCertificateValidation
[15:59:01] <doug_> I was following this http://docs.mongodb.org/manual/tutorial/configure-ssl/#mongo-shell-ssl-connect but then I found http://docs.mongodb.org/master/tutorial/configure-x509/ ... Which one do I want?
[15:59:14] <saml> wait.. that looks like server to server
[15:59:22] <saml> you prolly need to consult ruby driver doc
[15:59:30] <doug_> saml: which I can't find
[16:00:21] <doug_> hang on... server to server isn't encrypted
[16:00:24] <doug_> ?
[16:01:54] <saml> i don't know. i don't use ssl at all
[16:02:04] <doug_> *sigh*
[16:02:23] <doug_> Those two docs seem to say the same thing, but in confusingly similar ways
[16:03:10] <doug_> the ruby driver seems to basically ignore the ssl_cert option when connecting.
[16:04:12] <doug_> if I enable ssl with CA verification, I can connect locally with the mongo command by supplying the PEM file. However, the ruby driver when given the ssl_cert fails
[16:04:57] <doug_> I can disable CA verification, but I don't know if that's good enough. No pem file is required, so how the heck does encryption even work? What's it encrypting against?
[16:06:10] <saml> yah https://github.com/mongodb/mongo-ruby-driver/blob/master/lib/mongo/client.rb it seems to use :database and :write only
[16:06:21] <saml> why use SSL? just curious
[16:06:29] <saml> putting mongodb on public network?
[16:07:06] <doug_> saml: it's a requirement here. Yes, going onto EC2
[16:07:29] <doug_> the docs for the ruby driver are terrible. I can't find mention of SSL at all https://github.com/mongodb/mongo-ruby-driver/wiki
[16:13:44] <doug_> This SUCKS. This will work... mongo --ssl --sslPEMKeyFile /etc/ssl/mongodb-qa.pem but when I pass ssl_cert of /etc/ssl/mongodb-qa.pem to the ruby driver it effin fails.
[16:24:35] <saml> i'd delete and give up
[16:25:45] <saml> could be easier to use their virtual private cloud
[16:26:54] <doug_> saml: not an option
[17:07:13] <doug_> if I enable SSL in mongo... that encrypts between client and mongos.... but what about within the cluster? Is that then encrypted too?
[17:14:54] <doug_> I want to cry. :(
[17:15:21] <Derick> doug_: it's encrypted between all nodes
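A minimal sketch of the server-side setup Derick describes, assuming 2.6-era flags and hypothetical certificate paths (this is an illustration, not the cookbook's actual configuration):

```shell
# Sketch only: 2.6-era mongod TLS flags with made-up file paths.
# --sslMode requireSSL applies to client connections AND intra-cluster
# traffic (mongos, shards, config servers), matching Derick's point.
mongod --sslMode requireSSL \
       --sslPEMKeyFile /etc/ssl/mongodb.pem \
       --sslCAFile /etc/ssl/ca.pem
```

The hostname-mismatch error that follows is exactly why the certificate CN/SAN must match the name members use to reach each other, hence "use DNS names", not IPs.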
[17:16:15] <doug_> Argh! "ERROR: The server certificate does not match the host name 10.137.26.212"
[17:16:26] <doug_> I'm getting that when I try and configure the router
[17:16:36] <Derick> doug_: you should use DNS names
[17:17:00] <doug_> Derick: I thought I was. Unfortunately I'm setting this up with a forked third party chef cookbook
[17:18:57] <cheeser> there are too many chefs ... *puts on sunglasses* ... in this kitchen. *YYYYYYYYYYEEEEEEEEEEEEEEEAAAAAAAAAAAAAHHHHHHHHHHHHHH*
[17:19:06] <doug_> sigh
[17:20:14] <doug_> yep, nodes are coming back with FQDN's... INFO: CONFIG SERVERS = [node[mongo-cfg01.qa.slicetest.com], node[mongo-cfg02.qa.slicetest.com], node[mongo-cfg03.qa.slicetest.com]]
[17:21:06] <doug_> maybe it's because the FQDN has no PTR record?
[17:25:43] <doug_> nope, the cookbook actually uses the IP. FFS
[18:00:27] <MacWinner> hi, how would you check to see if a field is $ne: 'string1' AND $ne: 'string2' ?
[18:45:34] <cheeser> MacWinner: with a $and ?
[18:46:00] <MacWinner> cheeser, yep :) .. i actually just used $nin
[18:46:10] <MacWinner> is there a performance consideration there?
[18:47:55] <cheeser> i'm not sure offhand
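The two filters MacWinner and cheeser discuss are logically equivalent; a plain-JS sketch (the `status` field name is a made-up example), with the rough mongo equivalents in comments:

```javascript
// Mongo equivalents, roughly:
//   {$and: [{status: {$ne: "string1"}}, {status: {$ne: "string2"}}]}
//   {status: {$nin: ["string1", "string2"]}}
function matchesAnd(doc) {
  return doc.status !== "string1" && doc.status !== "string2";
}
function matchesNin(doc) {
  return ["string1", "string2"].indexOf(doc.status) === -1;
}
var docs = [{status: "string1"}, {status: "other"}];
console.log(docs.filter(matchesAnd).length, docs.filter(matchesNin).length); // 1 1
```

Note that a top-level query document already ANDs its conditions, so the explicit `$and` is only needed when both conditions target the same field, which is precisely this case; `$nin` expresses it more compactly.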
[19:21:04] <adrian_berg> http://www.reddit.com/r/mongodb/comments/26ji9u/how_to_get_a_handle_on_an_open_connection_without/
[19:21:11] <adrian_berg> the last comment is where my question is
[19:26:35] <saml> don't use Q
[19:29:48] <adrian_berg> saml: I'm just using what the person suggested to get it working
[19:59:04] <danijel> hi guys, im stuck with getting locations with $geometry and $near and maxDistance, here is example http://pastebin.com/P4F1kvc1
[19:59:10] <danijel> can you please help
[19:59:44] <danijel> how can i get locations 20 km radius
[19:59:49] <danijel> thanks
[20:07:06] <adrian_berg> saml: I figured it out, it was an old compiled file in the same directory
[20:07:25] <adrian_berg> Okay, I'm switching over to bluebird now to see if I can't get a more elegant solution
[20:07:36] <saml> oh no
[20:07:43] <saml> why use those if you're using coffee?
[20:07:57] <saml> i thought coffee solved those through careful language design
[20:08:37] <adrian_berg> huh?
[20:08:49] <adrian_berg> you still need to handle callbacks somehow
[20:09:00] <adrian_berg> promises are the best abstraction we have in js/coffeescript land
[20:09:13] <saml> bluebird and Q are awkward libraries to control data flow in javascript
[20:09:27] <adrian_berg> how else are you going to do it?
[20:09:43] <adrian_berg> i would be interested, i don't like having to use promises for everything, but it's all i'm aware of
[20:09:45] <saml> by not using node.js
[20:09:47] <adrian_berg> :)
[20:10:10] <adrian_berg> streams in clojure are nice, but the streams in node.js aren't the same unfortunately
[20:10:35] <adrian_berg> anyway, i'm using node, because that's what i'll have to be using
[20:17:20] <adrian_berg> https://pastee.org/puy6m
[20:17:27] <adrian_berg> That works just fine
[20:17:32] <adrian_berg> This isn't
[20:17:34] <adrian_berg> https://pastee.org/t96p2
[20:17:55] <adrian_berg> any ideas?
[20:18:49] <adrian_berg> oops that's wrong
[20:18:58] <adrian_berg> db.coffee is supposed to be something else
[20:19:03] <adrian_berg> let me repaste
[20:20:33] <adrian_berg> the stuff that works: https://pastee.org/wq6e4 the bluebird paste is still failing though, and that's what i'm trying to get help with
[20:23:21] <blizzow> How do I up the maximum number of connections my mongodb allows?
[20:35:20] <saml> blizzow, how many connections does your mongodb currently allow?
[20:35:51] <blizzow> about 20,000
[20:38:28] <Derick> that's the hardcoded maximum
[20:39:30] <blizzow> Derick, so maxconns is the way to set this in /etc/mongodb.conf?
[20:40:06] <Derick> no, 20,000 is the maximum that MongoDB allows
[20:40:17] <Derick> you can not set it higher than that
[20:45:08] <thevdude> much web scale
[20:47:07] <thevdude> I have a collection with entries very much like this: {_id: 1, team: "Team Awesome", abbr: "AWSM", players: ["AWESOMEPLAYER", "AWSMPSSM", "KICKAWES"]}, how can I replace one of the items in the "players" array without knowing which specific item it is with another given string?
[21:00:00] <thevdude> I figured it out, have to use the $ positional operator
[21:02:43] <blizzow> Derick: even if I set ulimit -n 64000 and put maxConns = 20000 in my mongodb.conf, db.serverStatus(); still returns a maximum of 16000 connections. :(
[21:20:21] <saml> thevdude, how did you do?
[21:52:11] <thevdude> saml: db.teams.update({players:"AWSMPSSM"}, {$set: {"players.$":"NewName"}})
[21:55:54] <saml> thevdude, so many dollars
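A plain-JS analogue of thevdude's positional-`$` update, using his own sample document (no server needed; the helper function is just for illustration):

```javascript
// In the mongo shell this is:
//   db.teams.update({players: "AWSMPSSM"}, {$set: {"players.$": "NewName"}})
// $ resolves to the index of the FIRST array element matched by the
// query, so only one occurrence is replaced.
function replaceFirstPlayer(team, oldName, newName) {
  var i = team.players.indexOf(oldName);
  if (i !== -1) team.players[i] = newName;
  return team;
}

var team = {_id: 1, team: "Team Awesome", abbr: "AWSM",
            players: ["AWESOMEPLAYER", "AWSMPSSM", "KICKAWES"]};
replaceFirstPlayer(team, "AWSMPSSM", "NewName");
console.log(team.players); // ["AWESOMEPLAYER", "NewName", "KICKAWES"]
```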
[22:02:57] <thdbased> Question about DB structure. Having posts documents in my DB, is it best to have the comments on the posts in the same document or in a separate collection?
[22:04:30] <cheeser> separate collection
[22:04:52] <thdbased> ok thx, any comments on why?
[22:05:20] <thdbased> I know there is no straight answer on this type of question but...
[22:05:49] <cheeser> comments can grow arbitrarily large and you could find yourself up against the 16M doc size limit pretty quickly.
[22:06:08] <thdbased> Correct didn't think about that one
[22:06:53] <cheeser> having worked on a mongo based CMS, that's an easy one for me to answer. ;)
[22:07:06] <thdbased> and then so the Post document has an array containing all the comment ID's?
[22:08:17] <Derick> cheeser: not on my blog though, never more than 10 comments :-)
[22:08:25] <Derick> so the answer is really again: it depends :-)
[22:09:49] <thdbased> It has to be separate because it will be high volume so 16MB is too small for sure
[22:10:02] <cheeser> comments refer back to the post
[22:10:12] <cheeser> for the same reasons.
[22:10:33] <thdbased> cool thx
[22:10:34] <cheeser> even if you don't get near the 16M limit, doc growth can cause movement on disk
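Hypothetical document shapes for the layout cheeser recommends — comments in their own collection, each pointing back at its post (all field names here are assumptions for illustration):

```javascript
// Comments reference the post, so a popular post can't grow toward the
// 16MB document limit or get moved on disk as comments accumulate.
var post = {_id: "post1", title: "Hello", body: "..."};
var comment = {_id: "c1", postId: "post1", author: "alice", text: "Nice!"};
// Fetching a post's comments is then one indexed query, roughly:
//   db.comments.find({postId: post._id})
console.log(comment.postId === post._id); // true
```

With this shape the post doesn't need an array of comment IDs at all; the back-reference plus an index on `postId` answers the lookup.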
[22:31:42] <whomp> every hour, i need to import about 30 million rows to a table from a csv file. each row of the file has a latitude and longitude value. how can i import them as one geojson object?
[22:33:04] <Derick> whomp: 30 million points in one document?
[22:33:15] <whomp> Derick, yes
[22:33:33] <Derick> you can use a geometrycollection in 2.6 for that
[22:34:57] <whomp> Derick, i want each point to be a separate document because it corresponds to a value that we want... right?
[22:35:32] <Derick> whomp: I think so, I don't know your data set
[22:35:38] <Derick> what does each row represent?
[22:36:12] <whomp> the row has four fields: lat, lon, value, and time. the objects are weather forecasts: x temperature at y time in some place
[22:36:33] <Derick> okay, then yes, one document per row I'd say
[22:36:37] <whomp> i want to run queries on the data afterwards to find forecasts for the area around someone
[22:36:49] <Derick> so one geojson object (let's call it lat) with the lon, lat pairs in it
[22:37:06] <Derick> and then two other fields for time, and temp
[22:37:09] <whomp> ok, so back to the first question. how do i import csv quickly with two of the values used to create a geojson object?
[22:38:06] <whomp> or a point object if i'm confused. whatever a simple coordinate pair should be
[22:45:33] <Derick> a point should be a geojson object
[22:47:44] <Derick> as for importing, you probably should write a script in your favourite language
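A sketch of such a script's core step — turning one CSV row into a document with a GeoJSON Point, per Derick's one-document-per-row advice. The field names are assumptions based on the row layout whomp described; the one non-obvious detail is that GeoJSON coordinate order is [longitude, latitude], not [lat, lon]:

```javascript
// Convert one "lat,lon,value,time" CSV row into a mongo-ready document.
// Note the [lon, lat] order required by GeoJSON.
function rowToDoc(line) {
  var parts = line.split(",");
  return {
    loc:   {type: "Point",
            coordinates: [parseFloat(parts[1]), parseFloat(parts[0])]},
    value: parseFloat(parts[2]),
    time:  parts[3]
  };
}

var doc = rowToDoc("40.7,-74.0,21.5,2014-05-30T22:00:00Z");
console.log(doc.loc.coordinates); // [-74, 40.7]
```

A 2dsphere index on `loc` would then support the "forecasts near someone" queries whomp mentions; for 30 million rows an hour, batching the inserts is what matters most.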
[23:59:51] <geardev> given these three files, how would you actually display the scores collection on the screen? Right now it's returning [object Object] https://pastee.org/ftkb