#mongodb logs for Thursday the 9th of May, 2013

[00:02:06] <iwaffles> Hmm sort of
[00:24:35] <iwaffles> Hmm, maybe I need to do an $and => {"$or" => {a} {B}}
[05:06:46] <krz> can i use $match with between?
[05:06:54] <krz> or would i need to use $gt and $lt
[05:07:11] <krz> something like $match : { score : { $gt : 70, $lte : 90 } }
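That form works as written: there is no $between, and $match takes the comparison operators directly. A minimal sketch of the full pipeline, assuming a collection named scores:

    db.scores.aggregate(
        // "between 70 (exclusive) and 90 (inclusive)"
        { $match: { score: { $gt: 70, $lte: 90 } } }
    )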
[05:39:21] <tomlikestorock> given a document schema of trait_text, person_name, trait_usage_count, I'd like to get each distinct trait_text with the person_name who has the max(trait_usage_count). I'm trying to do this using $first with aggregation, and am failing. What should I be doing?
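One common pattern for a per-group max is to $sort before the $group, so that $first picks the document with the highest count in each group. A sketch, assuming a collection named traits:

    db.traits.aggregate(
        // highest trait_usage_count arrives first within each trait_text
        { $sort: { trait_text: 1, trait_usage_count: -1 } },
        { $group: {
            _id: "$trait_text",
            person_name: { $first: "$person_name" },
            trait_usage_count: { $first: "$trait_usage_count" }
        } }
    )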
[07:42:35] <[AD]Turbo> hello
[07:46:26] <pratz> Is it fine if I use gridfs for files less than 16MB?
[07:46:45] <pratz> I have files up to 10-12MB but in future they might grow
[07:58:41] <Nodex> pratz : yes, why wouldn't it be?
[08:23:05] <_{Dark}_> ehlo
[08:24:44] <_{Dark}_> does anyone have experience restoring a mongo database from a single server (one mongodb instance, no replica, no shard) into a replica set?
[08:26:04] <Baribal> _{Dark}_, not really, but that shouldn't be an issue besides taking quite a bit of time if the DB is big.
[08:26:28] <_{Dark}_> no the DB is little
[08:26:59] <_{Dark}_> Baribal, I'm just checking which is the best way to do this
[08:27:37] <_{Dark}_> just copy the files and perform some action on the replica set?
[08:28:11] <_{Dark}_> because I read an article that says not to use db.clone() in a replica set environment
[08:28:34] <_{Dark}_> Baribal, just to give you the whole picture:
[08:28:57] <_{Dark}_> I have one server, Mongo "A", with one db "test"
[08:29:09] <_{Dark}_> I need to copy this db "test"
[08:29:16] <_{Dark}_> in the new infrastructure
[08:29:39] <_{Dark}_> that is composed of 3 servers
[08:30:00] <_{Dark}_> B, C, D
[08:31:35] <_{Dark}_> that are configured in this way: http://www.catify.com/2010/11/22/mongodb-infrastructure-tests-part-ii-produktion-ready-sharding/
[08:32:29] <Nodex> you said "no shard", but that link speaks of shards
[08:34:05] <_{Dark}_> Hello Nodex
[08:34:30] <Nodex> do you just want a copy of your data on each server?
[08:35:27] <_{Dark}_> I would like to copy the data that I have on the old single machine to the new infrastructure (designed like the link that I posted)
[08:35:42] <_{Dark}_> and of course be sure that this data is correctly replicated
[08:36:08] <_{Dark}_> after this I will probably decide on a shard key and also shard the current data
[08:36:33] <Nodex> mongodump to each machine and restore it
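A sketch of that approach, with made-up host names (A is the old server, B the primary of the new replica set); restoring to the primary lets the set replicate the data to the other members:

    mongodump --host A --db test --out /tmp/dump
    mongorestore --host B --db test /tmp/dump/test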
[08:38:30] <_{Dark}_> ok Nodex, I will try
[08:38:32] <_{Dark}_> tnx
[09:27:27] <FluxiFlax2023> hi all I am getting this error: failed: exception: terminating request: request heap use exceeded 10% of physical RAM
[09:27:27] <FluxiFlax2023> ...although my total RAM is 6GB...my dataset is 1.1GB...how can I overcome this limitation ?
[09:44:24] <kali> FluxiFlax2023: you can maybe help by $project-ing and $match-ing stuff very early in your pipeline
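What kali is suggesting, as a sketch with made-up field names: put $match (and $project) at the top of the pipeline so later stages hold less in memory.

    db.events.aggregate(
        { $match: { created_at: { $gte: ISODate("2013-05-01") } } }, // drop unneeded docs first
        { $project: { user_id: 1, score: 1 } },                      // keep only the fields later stages use
        { $group: { _id: "$user_id", total: { $sum: "$score" } } }
    )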
[09:45:15] <FluxiFlax2023> kali, sorry.. don't really know what that means..
[09:45:20] <FluxiFlax2023> :)
[09:45:50] <kali> FluxiFlax2023: show me your pipeline, and a typical doc then :)
[09:47:11] <FluxiFlax2023> kali, I am really new to mongodb and this is the first time I use it ... I am using it with an open source product, I know the very basics of mongodb... but basically the product creates the db and handles all access to it
[09:55:54] <kali> FluxiFlax2023: well, it looks like this "open source product" is somehow overusing or badly using mongodb's aggregation framework. you probably need to get in touch with the author
[09:56:45] <FluxiFlax2023> kali, I did so.. nothing fruitful so far... thanks anyway
[09:56:56] <FluxiFlax2023> kali, no way to disable the 10% limitation ?
[10:00:49] <kali> FluxiFlax2023: not that i know of
[10:53:51] <vargadanis> hello everyone! I am fairly new to mongodb and I was curious if there is some docs on how to set up a typical mongodb env.
[10:54:06] <vargadanis> relatively small load system is what I am talking about
[11:06:33] <Nodex> have you installed it yet?
[11:08:16] <vargadanis> yup
[11:08:23] <vargadanis> well just the binary
[11:09:03] <vargadanis> followed the "first steps" docs on the mongodb website
[11:13:04] <vargadanis> regarding replica sets: does the limit of 12 members mean that there can be only 12 mongod instances running in a cluster?
[11:17:25] <Nodex> yes, but you said your project was small load so you don't need to worry about that
[11:17:29] <Nodex> + just yet
[11:29:06] <trupheenix> Nodex, why am I hearing this stuff about hyperdex being better than mongodb? their support is a joke. they barely have any drivers to boot.
[11:30:57] <Nodex> eh?
[11:38:06] <trupheenix> Nodex, never mind. read this http://hackingdistributed.com/2013/01/29/mongo-ft/
[11:42:40] <kali> booooooorrrrrrrring
[11:43:18] <kali> trupheenix: there's been one of these papers two or three times a year for a while
[11:43:50] <Nodex> LOL
[11:44:24] <Nodex> databases that try to rise by slating and undermining successful ones will never ever triumph
[11:45:06] <Nodex> all they do is put MongoDB down in terms of performance and ACID .... if their product is so great why has nobody heard of it :)
[11:45:34] <Nodex> A truly good product will shine through no matter what
[13:24:33] <vargadanis> with mongodb can I create document references across databases?
[13:25:57] <leifw> vargadanis: http://docs.mongodb.org/manual/reference/database-references/#dbref yep, set the $db field
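A sketch of such a cross-database reference, hand-built in the form the linked page shows (collection, id, and database names are made up; the key order $ref, $id, $db matters):

    db.people.insert({
        name: "ada",
        employer: { "$ref": "companies", "$id": ObjectId("518be7a638625ce734000001"), "$db": "hr" }
    })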
[13:27:23] <vargadanis> so basically if I carefully structure my DBs, DB-level write locks should not impact concurrent write performance as much as some bloggers claim (how crappy that is)
[13:28:54] <kali> vargadanis: the db-level write lock is rarely the issue
[13:29:01] <Nodex> you really should avoid dbrefs where possible
[13:29:11] <kali> yeah, i was coming to that :)
[13:29:38] <kali> vargadanis: dbref is only a convention. mongodb does not do anything clever with them
[13:31:49] <Nodex> http://theonion.github.io/fartscroll.js/ LMFAO
[13:32:29] <Nodex> "You want fart noises as you scroll? We've got you covered."
[13:34:19] <kali> Nodex: i don't know what to say
[13:35:06] <Nodex> the things people come up with amazes me
[13:35:22] <vargadanis> kali, what is usually the issue then?
[13:36:52] <kali> vargadanis: most of the time, people try to use mongodb as if it were a relational engine, don't read the doc, and get bitten because it's not a relational engine
[13:37:33] <Nodex> + then blame MongoDB because it doesn't do [insert cool feature here]
[13:39:52] <vargadanis> well I am just researching my options.. I wanna learn more about NoSQL in general and MongoDB seemed like a good place to start
[13:40:08] <vargadanis> that's all
[13:41:10] <Nodex> yup
[13:41:30] <fxhp> So
[13:42:00] <fxhp> I implemented "search" on my web application by accepting a regex from user, and using that with $regex on the data field
[13:42:18] <fxhp> Anything I should look out for when it comes to injection?
[13:43:31] <kali> fxhp: performance, in this case, would be more worrying. injection is not a problem, as there is no way the user can "escape" from the regexp string.
[13:44:03] <kali> fxhp: unless you craft json queries by concatenating string
[13:50:00] <vargadanis> as far as write concerns go: if it is very important to me that all the information I send is actually written to the DB, or that I get an error for that write operation, which level of write concern should I use?
[13:50:01] <fxhp> kali: naw, I'm just placing the string as the argument to $regex
[13:50:33] <Nodex> vargadanis : by default the write concern is set to safe writes
[13:50:50] <Derick> (in recent driver versions)
[13:50:51] <kali> vargadanis: SAFE, and it's now the default
[13:51:13] <kali> vargadanis: so that people not reading the doc can no longer be bitten by this one :)
[13:51:21] <fxhp> kali: I agree regex could cause performance issues with very big datasets, so a better solution should be found in the future, but it currently works decently and it's only a few lines of code.
[13:51:41] <kali> fxhp: you're fine then
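On the performance side, one thing worth knowing: only a case-sensitive prefix expression can use an index, so anchoring the pattern helps a lot. A sketch with made-up collection and field names:

    // unanchored: scans every value in the collection (or index)
    db.items.find({ data: { $regex: "term" } })
    // prefix-anchored and case-sensitive: can use an index on data
    db.items.find({ data: { $regex: "^term" } })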
[13:52:29] <Nodex> + in recent versions (sorry)
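For the record, "safe" here means each write is acknowledged via getLastError, which recent drivers now call for you by default; a shell sketch of doing it explicitly:

    db.things.insert({ x: 1 })
    // err is null in the reply if the write was acknowledged successfully
    db.runCommand({ getLastError: 1, w: 1 })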
[13:58:38] <vargadanis> do you guys know of some use case studies regarding MongoDB?
[13:58:46] <vargadanis> also: can a replica set contain multiple shards?
[13:59:12] <vargadanis> the question really is: how limiting is that 12-replica maximum in a replica set?
[13:59:34] <kali> vargadanis: nope, it's the other way around. a shard is a replica set
[14:00:02] <Nodex> shards replicate top to bottom
[14:00:11] <Nodex> i.e. 1 shard to many replicas
[14:02:05] <vargadanis> I see! alright .. so let's say I got 12 mongod instances in a replica set <-- that forms a shard, right?
[14:02:17] <kali> yeah, but 12 sounds extreme
[14:02:23] <kali> most people go for 2 or 3
[14:02:33] <vargadanis> I'm hardcore :D
[14:02:53] <kali> 12 replicas would suppose a huge read load
[14:03:19] <Nodex> vargadanis : a shard is exactly that .... it's a "shard" or "piece" of your total data
[14:03:26] <Derick> kali: they cause a lot of extra network overhead - and the election protocol doesn't really scale that well
[14:03:50] <Nodex> if you have 3 shards and you have equal shard keys then each shard will contain 1/3rd of your data
[14:04:00] <kali> Derick: i'm aware of that... i assume you were talking to vargadanis
[14:04:15] <Derick> kali: rather just in general :)
[14:04:17] <Nodex> if you have 3 replicas in each shard's set then you still have 1/3rd on each shard but it's backed up 3 times
[14:05:00] <vargadanis> ahha! alright, I'm getting there
[14:05:17] <Nodex> shards allow read/write scaling and also for parts of your data to be offline whilst keeping the remainder of your app online (i.e. the other 2/3rds)
[14:05:17] <kali> vargadanis: replica set is for 1/ high availability and persistence, 2/ high read load. sharding is for write scalability, and dataset size scalability in general
[14:06:05] <Nodex> replica sets do what kali just stated - mostly availability and read scaling... some people use them to have an exact copy of data for data warehouse tasks
[14:07:11] <vargadanis> such as analytics I suppose
[14:07:25] <Nodex> yeh that sort of thing
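Tying the terms together, a sketch of adding a replica set as one shard, run against a mongos (set, host, db, and key names are made up):

    sh.addShard("rs0/hostB:27017,hostC:27017,hostD:27017")  // one shard = one replica set
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.events", { user_id: 1 })       // spread mydb.events across shards by user_id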
[14:08:01] <vargadanis> is mongod instance single threaded?
[14:08:20] <vargadanis> there seems to be a limitation to it because of the JS engine
[14:09:00] <kali> vargadanis: use of the js engine is discouraged in situations involving concurrency
[14:09:02] <Derick> mongod is not single threaded
[14:09:17] <Derick> only currently the js engine is (sortof)
[14:09:23] <Nodex> map/reduce is single threaded iirc
[14:09:28] <kali> Derick: still true with v8 ?
[14:09:30] <Derick> because it uses the js engine
[14:09:32] <Nodex> well map/reduce in MongoDB
[14:09:46] <Derick> kali: yes and no. AFAIK, v8 allows it, but we don't make use of it yet
[14:09:49] <vargadanis> kali, I read somewhere that in v2.4 it can execute multiple JS commands
[14:10:04] <Derick> (could be wrong there though)
[14:10:10] <Nodex> avoid JS commands where possible
[14:10:15] <Derick> yup
[14:10:15] <kali> yeah, one of you is wrong :)
[14:10:45] <vargadanis> The switch to V8 improves concurrency by permitting multiple JavaScript operations to run at the same time.
[14:10:48] <vargadanis> http://docs.mongodb.org/manual/release-notes/2.4-javascript/
[14:10:59] <Derick> vargadanis: yes, it *permits* it... let me ask
[14:11:09] <vargadanis> from whom?
[14:11:42] <vargadanis> ohh.. he seems to have a pretty icon next to his name on this channel :D
[14:11:59] <kali> vargadanis: yeah, he has knowledgeable colleagues.
[14:12:12] <vargadanis> and an e-mail address with "xdebug" in it hehe
[14:12:18] <Derick> yes, that too
[14:12:38] <vargadanis> ohh
[14:12:45] <vargadanis> well you make my days go easier
[14:12:51] <vargadanis> thank you for that
[14:12:54] <whiskeynerd> I've just started learning node.js and mongodb and having a problem querying even after looking at the reference, can I get some help please?
[14:12:54] <Derick> np!
[14:12:56] <whiskeynerd> http://pastebin.com/wjgRNvS1
[14:13:23] <whiskeynerd> when I use toArray() on find() undefined is returned and the callback within toArray() does nothing
[14:14:08] <vargadanis> whiskeynerd, well console tells you to set a default write concern
[14:14:20] <Derick> whiskeynerd: do you get your two messages before your find?
[14:14:49] <kali> async nightmare at its best
[14:14:52] <whiskeynerd> which two messages?
[14:15:02] <vargadanis> could it be that the writes have only been buffered, not actually committed, and that's why it cannot be found?
[14:15:17] <Derick> whiskeynerd: "Poop inserted"
[14:15:23] <whiskeynerd> ya I get those
[14:15:29] <Derick> but before your find?
[14:15:29] <Derick> ({"id": "1"
[14:15:33] <Derick> you probably want _id
[14:15:35] <whiskeynerd> yes
[14:16:00] <whiskeynerd> well even if I don't pass an arg I still have the same problem
[14:16:06] <whiskeynerd> or if I use a different key
[14:17:14] <Derick> vargadanis: all JS ops are indeed no longer serialised now in 2.4
[14:17:42] <vargadanis> YAY for the docs reading :)
[14:18:20] <vargadanis> bb
[14:18:22] <Nodex> whiskeynerd : what happens on just find();
[14:18:28] <Nodex> with no params?
[14:18:37] <whiskeynerd> same thing
[14:20:15] <Derick> whiskeynerd: it might be that it's just too fast and the inserts haven't been made yet by the time you run find
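A sketch of sequencing the two with callbacks so the find only runs once the insert is acknowledged (assuming the 1.x node driver; collection stands for whiskeynerd's own variable):

    collection.insert({ name: "poop" }, { w: 1 }, function (err, result) {
        if (err) throw err;
        // the insert is acknowledged; now it is safe to query
        collection.find({ name: "poop" }).toArray(function (err, docs) {
            if (err) throw err;
            console.log(docs);
        });
    });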
[14:20:59] <whiskeynerd> Derick: I had done the insert stuff in a different file before and the query in another and ran them separately and had the same problem
[14:21:10] <whiskeynerd> just combined the operations into one file for easier pastebin
[14:21:18] <Derick> add a console.log just before and after the find?
[14:21:24] <Nodex> whiskeynerd : can you go on the shell and see if the data made it?
[14:21:42] <Derick> also, what sort of whiskey do you like? /me is more of a whisky fan
[14:22:15] <whiskeynerd> canadian club. cheap but still tasty. you?
[14:23:17] <whiskeynerd> i haven't learned to use the shell yet. would it just be >mongo databaseName
[14:23:19] <whiskeynerd> ?
[14:23:36] <Derick> whiskeynerd: scots single malts I'm afraid ;-)
[14:23:44] <Derick> whiskeynerd: and yes, "mongo dbname"
[14:23:58] <Nodex> db.persons.find();
[14:25:35] <whiskeynerd> I get this error when trying to connect to the db through shell http://pastebin.com/V48yGf0Y
[14:26:16] <Nodex> means your server is not running
[14:26:25] <Nodex> netstat -apn | grep 27017
[14:26:50] <Nodex> arf - winblows ... not sure how to on that
[14:28:31] <whiskeynerd> damn getting an error 1067 when trying to start it
[14:28:50] <whiskeynerd> I had followed the guide to set it up as a windows service so I thought it worked. I'll need to research this
[14:30:18] <whiskeynerd> Looks like I don't have enough space on my C drive... I don't have anything to get rid of though.
[14:30:22] <whiskeynerd> thanks for your help guys
[14:32:25] <whiskeynerd> I tried netstat --smallfiles before and it seemed okay but the service still says stopped. Oh well it was about time to reformat anyway
[14:37:38] <newbsduser> what is the best way to check whether a mongodb instance is up or down? (localhost:27017)
[14:38:11] <newbsduser> I am using: mongo --eval "printjson(db.serverStatus())" $myhostname:$myport 2>&1 |grep 'connect failed' |wc -l
[14:40:09] <Nodex> pid maybe?
[14:41:10] <newbsduser> Nodex, sometimes
[14:41:26] <newbsduser> pid is up but socket doesn't answer
[14:41:49] <newbsduser> I am trying to create an alarm script
[14:42:08] <newbsduser> for cron.. it will try to get status and if it fails it will reset mongodb instance
[14:42:13] <newbsduser> what do u think?
[14:42:21] <Nodex> I would use netcat or something tbh
[14:42:35] <kali> i would use nagios
[14:43:50] <newbsduser> why is there no timeout value for the mongo command?
[14:44:08] <newbsduser> mongo --eval "printjson(db.serverStatus())" $myhostname:$myport hangs if the socket doesn't answer
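A sketch of the netcat-style probe Nodex suggests, plus a wrapper so the mongo probe itself cannot hang (assuming GNU coreutils timeout is available):

    # port check with a 5 second timeout
    nc -z -w 5 localhost 27017 && echo up || echo down
    # or bound the mongo probe itself
    timeout 5 mongo --eval "printjson(db.serverStatus().ok)" localhost:27017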
[15:36:04] <cofeineSunshine> hello
[15:36:07] <cofeineSunshine> have app
[15:36:32] <cofeineSunshine> and out of the blue some tests fail due to error: raise ConnectionFailure(str(e))
[15:36:35] <cofeineSunshine> ConnectionFailure: connection closed
[15:36:53] <Nodex> hold, I'll get my crystal ball
[15:37:09] <cofeineSunshine> my question would be: are there any tools to monitor connection count
[15:37:25] <cofeineSunshine> actually n00b here, came from relational database world
[15:37:34] <cofeineSunshine> where to start
[15:37:47] <cofeineSunshine> first impression is that there are no available connections
[15:38:21] <cofeineSunshine> looks like we have more and more functionality/tests
[15:38:34] <cofeineSunshine> and the DB fails due to unavailable connections
[15:39:00] <Nodex> are you doing silly things like closing your connections ?
[15:39:11] <cofeineSunshine> nope
[15:39:17] <Nodex> what driver?
[15:39:20] <Nodex> which*
[15:39:21] <cofeineSunshine> pymongo
[15:39:29] <cofeineSunshine> mongokit ontop
[15:41:18] <Nodex> can you connect to the shell or is it global to that too?
[15:48:30] <cofeineSunshine> yes
[15:48:40] <cofeineSunshine> Nodex: i can connect to the shell during tests
[15:51:02] <cofeineSunshine> WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
[15:51:08] <cofeineSunshine> could this be?
[15:51:16] <cofeineSunshine> running on osx
[15:51:22] <cofeineSunshine> not under root
[15:51:32] <kali> yes, definitely
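The usual fix is to raise the soft limit in the shell that starts mongod; a sketch:

    ulimit -n 2048   # raise max open files for this shell
    mongod           # then start mongod from the same shell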
[15:52:34] <kali> is your test suite instantiating the mongoclient at the beginning or for every test case ?
[17:27:28] <leifw> can you enableSharding on the local database?
[17:28:09] <leifw> never mind, you cannot.
[17:43:00] <Bluetegu> Hi. what is the best way to find only the middle element of an array in each document? I can get the first or the nth element using $slice, but each document has an array of a different size. Thanks.
[17:53:50] <WarDekar_> hi i'm trying to turn on auth and remote connect but i'm having difficulties, on the system DB i added a user with a role as "userAdmin"
[17:53:57] <WarDekar_> but i still can't login with that user i get login failed exception
[17:55:14] <Nodex> Bluetegu : you can get array elements with a numbered index
[17:55:39] <Nodex> example .... foo : ['a','b','c'] ....-> foo.0 || foo.1 || foo.2
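For a fixed position, $slice with a [skip, limit] pair does the same thing; the middle element is trickier because it depends on each array's length, which a find can't compute server-side, so one sketch (field name foo as in Nodex's example) fetches it client-side:

    // third element only: skip 2, take 1
    db.docs.find({}, { foo: { $slice: [2, 1] } })
    // middle element: use the length on the client
    var d = db.docs.findOne();
    var mid = d.foo[Math.floor(d.foo.length / 2)];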
[17:57:46] <WarDekar_> anyone?
[18:26:28] <MANCHUCK> I am trying to do a rolling update with my servers. I have the primary on 2.2.3 and a secondary on 2.4
[18:26:46] <MANCHUCK> when the secondary tries to get the replica set i get an error about the system index
[18:26:52] <MANCHUCK> ERROR: exception: Unknown index plugin 'account' in index { 0: "account" } on: { ts: Timestamp 1368123315000|2, h: -1851603342263879167, v: 2, op: "i", ns: "pla_auth.system.indexes", o: { _id: ObjectId('518be7a638625ce734000001'), ns: "pla_auth.users", key: [ "account" ], background: true, name: "0_account" } }
[18:27:06] <MANCHUCK> the only index on that collection is the _id index
[19:19:46] <starfly> MC: Perhaps our mileage varies on rolling upgrades; I'm sure there are scenarios where the oplog or RS dependent things change between releases, so e.g. you can't play a lower version oplog into a replica which is expecting a slightly different oplog structure. Not speaking from direct knowledge about the two releases in question, but musing. Probably best to report the issue, good luck.
[20:14:18] <dgarstang> I'm using the REST API... like this: curl http://apex.hub.foo.com:28017/truthdb/ec2_inventory/?filter__id=i-1415d559 ... output has offset, total_rows etc... which is too much info for upstream parser... can this be slimmed down?
[20:17:41] <akrs> hey guys - i'm using the aggregation framework to get articles sorted by number of comments (array on article object), but all i'm getting back is the _id and the commentsLength. how do I get back the full objects as well?
[20:18:25] <andredublin> what commands are you using now to produce your output?
[20:21:31] <mgriffin> why can't i reuse obj with find but can with findOne? http://privatepaste.com/9d43a43d99
[20:23:59] <scottbessler> find returns a cursor not an object or array
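That is the difference in a nutshell: find() hands back a cursor to iterate, findOne() a plain document. A shell sketch with a made-up collection name:

    var doc = db.things.findOne({ x: 1 });   // a single document, or null
    var cur = db.things.find({ x: 1 });      // a cursor: iterate it, or materialise it
    cur.forEach(function (d) { printjson(d); });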
[20:25:34] <TommyCox> akrs: You use the project command after the aggregation to output the other wanted fields
[20:25:52] <scottbessler> i'm suddenly seeing corruption issues in mongo logs after restoring an ebs snapshot of a (journal-enabled) slave..
[20:26:17] <scottbessler> i've been restoring the previous day's snapshot daily for the last few months and this seems possibly correlated to going from 2.2.x to 2.4.x
[20:26:21] <scottbessler> but that could be coincidental
[20:26:41] <dgarstang> no one here uses rest api? :(
[20:26:50] <scottbessler> is EBS snapshot not safe anymore? i thought it was recommended in the docs but then went looking and it doesn't seem to be mentioned anymore
[20:29:24] <akrs> andredublin: Article.aggregate({$unwind:'$comments'}, {$group:{_id:'$_id', commentTotal:{$sum:1}}}, {$sort:{commentTotal:-1}}, {$limit:10}, function(err, articles){})
[20:33:41] <andredublin> akrs: i think what you want to do is project after you unwind and group by post id and sum the comments
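Concretely, fields survive a $group only if the stage carries them, e.g. with $first; a sketch of akrs's pipeline extended that way (title is a made-up article field):

    Article.aggregate(
        { $unwind: "$comments" },
        { $group: {
            _id: "$_id",
            commentTotal: { $sum: 1 },
            title: { $first: "$title" }   // carry each wanted field through the group
        } },
        { $sort: { commentTotal: -1 } },
        { $limit: 10 },
        function (err, articles) { /* ... */ }
    );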
[20:35:17] <akrs> thanks!
[20:51:34] <hopeless> Hey... I have a document structure like this: { "film" : "<film_name>", "cinema" : "<show_time>"}
[20:52:06] <hopeless> Hey... I have a document structure like this: { "film" : "<film_name>", "cinema" : "<cinema_name>", "show_time": "<show_time>"}
[20:52:21] <hopeless> I want to produce:
[20:53:27] <hopeless> { "film" : "<film_name>", "cinemas" : [ {"cinema" : "<cinema_name>", "times" : ["<show_time>", "<show_time>", ...]} ] }
[20:53:35] <hopeless> Does anyone please have suggestions on how to do this?
[20:53:43] <hopeless> I've spent hours on it and it's driving me bonkers.
[20:53:52] <hopeless> I looked at aggregate and mapreduce
[20:54:38] <hopeless> ...or is the solution to change my document structure?
[20:55:29] <hopeless> Anyone?
[20:55:40] <hopeless> <sigh> I am condemned to return to MySql.
[20:57:12] <mgriffin> well that escalated quickly
[21:04:55] <mgriffin> is db.showings.insert({film:"Sharks 15",cinema:"Dollar",showtimes:["3:30","7:30","9:30"]}) the answer hopeless was looking for?
[21:06:17] <kali> mgriffin: i think the answer was "aggregation framework"
[21:06:31] <kali> mgriffin: maybe a hint about stacking two $group
[21:09:01] <kali> mgriffin: "aggregation framework" is the right answer to half of the questions asked here anyway :)
[21:10:41] <mgriffin> i see, he wanted multiple documents and "group by film, cinema"
[21:11:31] <mgriffin> so nesting as i did could work(?) but is a different schema, and agg framework is the approach he really wanted due to application design
[21:11:33] <kali> group by film,cinema, and then group by film
[21:12:03] <kali> i think so
[21:12:18] <mgriffin> thanks, that makes sense
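The two stacked $group stages kali means, sketched over hopeless's schema (using mgriffin's showings collection name):

    db.showings.aggregate(
        // first group: one doc per (film, cinema) with its show times
        { $group: {
            _id: { film: "$film", cinema: "$cinema" },
            times: { $push: "$show_time" }
        } },
        // second group: one doc per film with its list of cinemas
        { $group: {
            _id: "$_id.film",
            cinemas: { $push: { cinema: "$_id.cinema", times: "$times" } }
        } }
    )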
[21:12:40] <kali> another general rule about the conversation here... people tend not to react nicely when you recommend a schema change :)
[21:13:16] <starfly> LOL! :)
[21:13:37] <mgriffin> luckily i am clueless and have an excuse ( http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect )
[21:13:50] <mgriffin> isn't the point of mongo schema change ;)
[22:54:31] <Richhh> a good beginner tutorial to quickly get mongodb working?
[22:55:40] <mgriffin> you mean to install the service and start it?
[22:58:17] <scoutz> is it recommended to run the mongos processes on the app server or on the replica/sharding servers
[23:02:32] <nveselinov> Hi, Does anyone use mongodb on openvz?
[23:36:07] <Richhh> mgriffin yes
[23:36:19] <mgriffin> which os/distro?
[23:39:41] <adrivanrex_> hiii
[23:40:14] <adrivanrex_> anyone arround?
[23:41:08] <mgriffin> adrivanrex_: http://workaround.org/getting-help-on-irc
[23:41:15] <Richhh> atm im limited to this laptop with windows 8, and i only have a crappy free web host (000webhost) presumably running linux
[23:41:29] <adrivanrex_> I want to know how to get the row number ??
[23:41:40] <mgriffin> the _id is included by default
[23:41:42] <Richhh> no idea which distro
[23:41:57] <adrivanrex_> in INT?
[23:42:44] <mgriffin> Richhh: https://www.youtube.com/watch?v=hX5louVryOQ
[23:42:48] <Richhh> thx
[23:43:32] <mgriffin> adrivanrex_: the "collection" has an _id which is like a PK for a "table" in SQL
[23:45:53] <adrivanrex_> mgriffin I want to get this
[23:45:54] <adrivanrex_> http://sphotos-g.ak.fbcdn.net/hphotos-ak-snc6/282256_186136724877103_1458855832_n.jpg