PMXBOT Log file Viewer


#mongodb logs for Tuesday the 15th of July, 2014

[01:26:16] <isthisreallife> please help me remove mongodb, http://pastie.org/9390975
[01:26:26] <isthisreallife> ubuntu
[01:30:47] <isthisreallife> anyone?:)
[08:04:36] <ziv> looking for help, I'm using a simple PHP script to insert data into a collection and every record inserted has the same "_id"
[08:04:53] <ziv> > db.user.find();
[08:04:53] <ziv> { "_id" : ObjectId("53c4de80045acdc6198b4567"), "email" : "sdsdf@dsfsadf.com" }
[08:04:53] <ziv> { "_id" : ObjectId("53c4de91045acdcb198b4567"), "email" : "sdsdf@dsfsadf.com" }
[08:13:10] <ziv> looking for help, I'm using a simple PHP script to insert data into a collection and every record inserted has the same "_id"
[08:13:49] <ziv> { "_id" : ObjectId("53c4e237045acd0d248b4567"), "email" : "1405411895x@x.xom" }
[08:13:49] <ziv> { "_id" : ObjectId("53c4e238045acd0f248b4567"), "email" : "1405411896x@x.xom" }
[08:18:16] <Nodex> congrats
[08:19:49] <Nodex> and those "_id"s are different, just an FYI
[08:20:00] <ziv> this is my script:
[08:20:06] <ziv> $m = new \MongoClient();
[08:20:06] <ziv> $u['email'] = ['email' => time() . 'x@x.com'];
[08:20:06] <ziv> var_dump($m->x->y->insert($u));
[08:20:13] <Nodex> what's the problem?
[08:20:16] <ziv> and the results from mongo:
[08:20:23] <ziv> > db.y.find();
[08:20:24] <ziv> { "_id" : ObjectId("53c4e395045acd92248b4567"), "email" : { "email" : "1405412245x@x.com" } }
[08:20:24] <ziv> { "_id" : ObjectId("53c4e3a5045acd94248b4567"), "email" : { "email" : "1405412261x@x.com" } }
[08:20:24] <ziv> >
[08:20:24] <Nodex> use a pastebin
[08:20:30] <Nodex> and what's the problem?
[08:20:34] <ziv> the same id each time
[08:20:37] <Nodex> no it's not
[08:20:37] <ziv> this is the problem
[08:20:48] <Nodex> 53c4e3a5045acd94248b4567 != 53c4e395045acd92248b4567
[08:20:59] <ziv> you are so right!
[08:21:02] <ziv> sorry :)
[08:21:34] <ziv> thanks a lot, my old eyes...
[08:21:40] <Nodex> :)
[11:32:47] <shambat> juancarlosfarah: ok thanks
[11:33:08] <shambat> rspijker: that makes sense
[12:06:23] <Foad_NH> Hi, I have a question. In performance, sorting by natural, how much is better than sorting by date?
[12:09:49] <Nodex> natural is disk order iirc
[12:09:59] <Nodex> it's not always date order
[12:10:32] <Nodex> you get a free date lookup if you use ObjectIds, as they're timestamp-based
[12:14:51] <Foad_NH> Nodex: thank you, and about performance? if I use _id?
[12:15:29] <Nodex> it's indexed by default
[12:18:05] <Foad_NH> Nodex: yes, so it's good enough to use it on large collections.
[12:31:35] <Nodex> that's implicit if it's indexed
[12:31:41] <Nodex> it doesn't get any faster than using an index...
[12:37:33] <Foad_NH> Nodex: thank you
[12:41:36] <Nodex> :)
[13:20:34] <umquant> What is the "mongo" way of doing a join? For example I have a document with 50 ints. Each int is representative of a memory address. In another document I define that memory address N = sensor Y. How would I do a query that would link those?
[13:22:31] <Nodex> normally you would embed those memory addresses and save the second query
[13:22:40] <bearclaw> an aggregate using an unwind on the ints, to have one document per int, and then a group?
[13:25:49] <umquant> Nodex, can you expand on that?
[13:26:05] <umquant> bearclaw, I will have to do some research. I am not familiar enough with mongo to follow your idea.
[13:27:27] <umquant> Here are my schemas: https://gist.github.com/anonymous/68df1d5a07f9e26a48bb
[13:30:45] <ghibe> hi, I am using mongodb 2.4.9 from the official ubuntu repos so I don't know if it's already fixed, but in the config file /etc/mongodb.conf there's this line "# Disable the HTTP interface (Defaults to localhost:27018)", while it actually seems to be 28017
[13:34:06] <rasputnik> ghibe: yes, that's a typo; it's usually the mongod port + 1000
[13:37:06] <ghibe> i was quite sure about it; I am asking because I don't know whether to post a bug report to mongodb or to whoever maintains the ubuntu repo.
[13:45:29] <rspijker> umquant: you can't do an aggregate over multiple collections, so that won't help you… You should go with Nodex's suggestion and change your schema
[13:46:58] <umquant> rspijker, even if memory addresses are assumed by array index? Like values[0] = memory address 0
[13:48:14] <rspijker> umquant: not sure if I follow… How would that still be a problem if you embed?
[13:52:09] <dragoonis> Derick, ping
[13:58:31] <Derick> dragoonis: it's better to just ask your question - more chance of people answering it
[13:59:19] <dragoonis> Renaming collection: results_staging.company_category_score_months to: results_staging.company_category_score_months_14_07_15_14_34_31
[13:59:19] <dragoonis> string(137) "Exception thrown: Failed to connect to: server.com:27017: Read timed out after reading 0 bytes, waited for 0.000000 seconds"
[14:00:08] <dragoonis> pecl extension version is: 1.5.4 ... getting this exception after
[14:00:08] <dragoonis> return $this->getMongoAdminDatabase()->command(array('renameCollection' => $from, 'to' => "$to"));
[14:00:23] <dragoonis> I wasn't getting this last week, but now I am
[14:01:47] <Derick> dragoonis: the operation probably takes too much time
[14:02:16] <dragoonis> It timed out after 0.000 seconds.. makes no sense.
[14:02:21] <Derick> that seems odd
[14:02:26] <Derick> it's probably not true
[14:02:32] <Derick> is this a web request, or CLI?
[14:02:34] <dragoonis> CLI
[14:02:39] <Derick> can you strace it then?
[14:02:45] <dragoonis> I will try
[14:02:51] <Derick> strace -o /tmp/strace.log php ...yourscript.php
[14:04:26] <dragoonis> Derick, you installed 'strace' on mac osx before ?
[14:04:46] <Derick> no, I don't think it has it
[14:04:52] <dragoonis> Google talks about "dtruss"
[14:06:46] <dragoonis> Derick, https://gist.github.com/dragoonis/bfe86bfd448ae9c60696
[14:07:37] <Derick> can you figure out how to add timestamps?
[14:08:00] <Derick> and this is rather useless...
[14:11:09] <dragoonis> Derick, this is as useful as I could get it using dtruss - https://gist.githubusercontent.com/dragoonis/bfe86bfd448ae9c60696/raw/54e1dfde724b91938a48b7c76cd8e95c2d141249/gistfile1.txt
[14:11:32] <dragoonis> Derick, It successfully renames two collections beforehand, and on the third rename it bails
[14:25:21] <stefuNz> hi. i have some slow operations, but i don't want them in my logfiles, because this just increases the IO. how can i turn that off?
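No one answers stefuNz in this log, so one hedged possibility (assuming the stock conf-file format used by the Ubuntu package, and that "slow operations" means the slow-query lines mongod writes to its log): rather than disabling logging outright, raise the threshold above which an operation counts as slow, so far fewer of them reach the logfile.

```
# /etc/mongodb.conf -- hypothetical tweak: raise the slow-operation
# threshold (default 100 ms) so fewer operations are written to the log
slowms = 1000
```

The same threshold can be changed at runtime from the shell with `db.setProfilingLevel(0, 1000)`.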
[15:11:03] <mango_> anyone know why either M102 or M202 haven't started yet?
[15:24:01] <juancarlosfarah> mango_: Both courses start today at 17:00 UTC and it is currently 15:21 UTC.
[15:24:44] <mango_> juancarlosfarah: yes, I'm in GMT
[15:25:15] <mango_> took a day off work today to get started.
[15:25:40] <mango_> juancarlosfarah: do you know when new material gets published?
[15:27:02] <juancarlosfarah> mango_: They should be up in an hour and a half. Usually new material gets published a week after the previous week's material, so in this case it would be every Tuesday at 17:00 UTC (18:00 UK time)
[15:28:16] <juancarlosfarah> mango_: Basically when the current week's homework is due, the following week's material is published.
[15:32:19] <mango_> juancarlosfarah: ok thanks, I just need to figure out when it's best to take a day off in the UK
[15:32:46] <mango_> today was probably not an ideal day, I'll make it on Wednesdays from next week.
[15:46:36] <juancarlosfarah> mango_: Yes, probably Wednesdays would be best.
[15:46:46] <mango_> juancarlosfarah: thanks.
[16:32:10] <movedx> I'm trying to enable sharding on a collection. I've defined a hashed index, but when I attempt to activate sharding, MongoDB complains that there is a missing value in the key for a certain document, but the key in said document isn't empty.
[16:32:49] <movedx> Is there a general reason for this, or is it very specific?
[16:34:49] <movedx> Hmm, the documents it doesn't like do have two blank fields, but they're not part of the hashed index?
[17:27:34] <oblio_> so I'm trying to write a shell script which grabs some output from a query and then uses it - but I seem to be having an issue. I see that I'm supposed to wrap the command in printjson(), but that doesn't give me results; it prints out things pertaining to the execution of the query.
[17:29:33] <mango_> M102 and M202 have started :)
[18:23:25] <daidoji> anyone use mongoengine and tests in here?
[18:24:47] <mango_> what is mongoengine?
[18:25:43] <daidoji> it's a Python ORM-type thing written on top of pymongo, for integration with Django ORM features I think
[18:26:26] <daidoji> anyways, they suggest some weird voodoo that has been incorporated into this project, but I'm having trouble overriding it when trying to write tests
[18:34:31] <daidoji> oops, others have been here before me, http://stackoverflow.com/questions/4774800/mongoengine-connect-in-settings-py-testing-problem
[18:50:24] <Yahkob> Anyone around that uses mongoose? I guess this could be answered by someone who hasn't used mongoose really - having trouble adding to a sub-document
[18:50:41] <Yahkob> http://stackoverflow.com/questions/24752339/pushing-data-to-sub-document-with-mongoose
[19:08:21] <arinel> anyone with experience with Go and mgo? I can't make session.UpsertUser() work. I can only login with mgo if I create a user via the mongo shell.
[19:25:20] <arinel> found the solution: update to the latest mgo code from their bzr repository
[19:31:49] <user55> hello! i would like to ask a question:
[19:32:09] <user55> when my PHP script calls MongoCollection::insert or MongoCollection::save with an undefined collection as a parameter
[19:32:17] <user55> mongo creates it automatically
[19:32:26] <user55> is it possible to disable this?
[19:33:36] <arinel> user55: not that I know of. You should be able to get an array of all available collections, then check if your collection is in this array prior to insert-ing or save()-ing
[19:34:22] <arinel> user55: or, you know, check if the collection is "undefined" and don't call MongoCollection::insert()
[19:34:50] <user55> yeah, so obvious
[19:35:31] <user55> but, does this affect the efficiency of an update?
[19:36:15] <arinel> user55: yes, it's faster if you don't actually send commands to the database
[19:37:11] <user55> i'll keep it as plan 'B', thanks!
[19:37:34] <arinel> yeah, go ahead with your plan A to send junk to the db
[19:38:05] <user55> .. eh, thanks anyway
[19:38:23] <lucknerjb> arinel lol
[20:50:15] <blizzow> I'm trying to bring up a hidden member of a replica set with a very large collection in it. The hidden member is 2.6.3 and the rest of the cluster is 2.4.x . The new hidden member got about 800GB into syncing what looks like about 1.7TB of data. Then it died saying the following:
[20:50:32] <blizzow> WARNING: the collection 'mycollection.mycollection_20140712' lacks a unique index on _id. This index is needed for replication to function properly
[20:50:32] <blizzow> 2014-07-15T20:41:07.751+0000 [initandlisten] To fix this, you need to create a unique index on _id. See http://dochub.mongodb.org/core/build-replica-set-indexes
[20:51:30] <blizzow> I look at the indexes from that collection and it does NOT have a unique index on _id. But neither do any of the other collections. I'm worried that enabling a unique index on that collection will cause my cluster to grind to a halt.
[20:51:43] <blizzow> Anyone know how/what I can do to get the new 2.6.3 node fired up?