#mongodb logs for Friday the 25th of April, 2014

[00:01:59] <ranman> jjsam: err don't think you had the full context for that, the act of serializing objects into json for v8 to be able to operate on them is slow
[00:11:22] <ranman> or into javascript objects rather
[01:29:54] <Favorlock> Hi there, can someone explain how to set up an authenticated user with mongodb for the latest version? I've followed the manual but this is what happens: http://puu.sh/8mjXM.png This is the user that I added: http://puu.sh/8mk3v.jpg Thanks to anyone who can help me :)
[03:51:08] <arrty> http://stackoverflow.com/questions/19446173/key-duplicates-in-the-array-in-mongodb-even-using-addtoset this is not working for me. I still have dupes with a unique index and doing an update with $addToSet
[04:25:31] <OilFanInYYC> Hey can anyone here answer a Node.js question about MongoDB?
[08:10:37] <thearchitectnlog> guys i got a mongodb segmentation error while trying to insert some special characters into the db with node js and the db went crazy
[08:11:03] <thearchitectnlog> i had to delete the document and restart the service
[08:19:29] <thearchitectnlog> anyone can help
[08:19:36] <thearchitectnlog> when i try to insert var comments = {'message':"xsbtd"};
[08:19:52] <thearchitectnlog> comments with the node js native driver i get a segmentation error
[08:19:59] <thearchitectnlog> anyone knows anything about that
[08:22:43] <thearchitectnlog> ?
[08:26:34] <rspijker> thearchitectnlog: what segfaults?
[08:26:36] <rspijker> node or mongod?
[08:27:22] <thearchitectnlog> i dunno i get 51881 segmentation fault NODE_ENV=production node app.js
[08:27:48] <thearchitectnlog> while trying to insert a var sth = {'sdsdsd':"sdsds"}
[08:27:57] <thearchitectnlog> if i insert it directly in the function it works
[08:28:20] <Nodex> are your chars UTF8 encoded?
[08:28:28] <thearchitectnlog> comments.insert({'sdsdsd':"sdsds"}, {safe:true}, function(err, result) { => this works
[08:29:00] <thearchitectnlog> comments.insert({sth}, {safe:true}, function(err, result) { => this gives a segmentation error
[08:29:11] <thearchitectnlog> yes sure
[08:29:20] <thearchitectnlog> i m defining it in a var
[08:29:52] <Nodex> you're double encoding
[08:30:02] <thearchitectnlog> how is that
[08:30:09] <Nodex> var foo={bar:1} .... db.foo.insert(foo,...)
[08:30:22] <thearchitectnlog> yes that used to work
[08:31:15] <thearchitectnlog> var msg = { 'name':req.body.name,
[08:31:15] <thearchitectnlog> 'email': req.body.email,
[08:31:15] <thearchitectnlog> 'message':req.body.message,
[08:31:15] <thearchitectnlog> 'Date': Date.now() };
[08:31:16] <thearchitectnlog> messages.insert(msg, function(err, results){
[08:31:18] <Aswebbb> Does anyone use Cassandra here?
[08:31:20] <Nodex> please use a pastebin
[08:31:25] <thearchitectnlog> ok
[08:31:30] <thearchitectnlog> that used to work
[08:32:01] <thearchitectnlog> where is the problem if i do var foo={bar:1} .... db.foo.insert(foo,...)
[08:32:02] <Nodex> when did it stop working?
[08:32:37] <thearchitectnlog> when i inserted some special characters but i'd deleted the document
[08:33:04] <Nodex> are you sure the document deleted?
[08:33:14] <thearchitectnlog> yes db.document.drop
[08:33:18] <thearchitectnlog> and made sure of it
[08:33:53] <Nodex> that's not how you delete a single document
[08:33:57] <Nodex> that's how you drop a collection
[08:34:18] <blaubarschbube> hi folks. what is the common way of backing up mongodb collections? if possible i would prefer a solution with bacula. any suggestions?
[08:34:48] <Nodex> mongoexport / mongodump :)
[08:36:13] <thearchitectnlog> nodex i didn't understand, yes i dropped the collection
[08:36:29] <blaubarschbube> Nodex, thanks, gonna have a look at that
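For reference, typical invocations of the two tools Nodex mentions (database, collection, and paths are placeholders):

```sh
# full binary dump of one database (BSON plus index metadata)
mongodump --db mydb --out /backups/2014-04-25

# JSON export of a single collection; -q accepts a query filter
mongoexport --db mydb --collection pages --out pages.json
```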
[08:37:00] <thearchitectnlog> nodex what do u mean by deleting the document
[08:37:02] <thearchitectnlog> ?
[08:37:17] <Nodex> [09:32:11] <thearchitectnlog> when i inserted some special characters but i'd deleted the document
[08:37:21] <Nodex> [09:32:38] <Nodex> are you sure the document deleted?
[08:38:19] <Nodex> so did you drop the collection or delete the document?
[08:39:06] <thearchitectnlog> nodex what is the difference i dropped the collection
[08:39:50] <Nodex> ok good luck :)
[08:41:51] <thearchitectnlog> nodex what should i do now ?
[08:42:16] <Nodex> learn to answer simple questions then perhaps you will get help :)
[08:43:08] <thearchitectnlog> nodex i dont know the difference between dropping a collection and deleting a document
[08:43:22] <Nodex> evidently not
[08:43:27] <thearchitectnlog> i just did db.collection.drop
[08:46:31] <thearchitectnlog> so can anyone help here, the error is weird: when i define the object in a var and insert it i get a segmentation error, but the same object inserted directly into the db succeeds
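The failing pattern, reduced to a minimal node native-driver sketch (connection string, collection, and variable names are illustrative; the log does not preserve the actual code):

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
    if (err) throw err;
    var comments = db.collection('comments');
    var doc = {message: 'xsbtd'};           // defined in a variable first
    comments.insert(doc, {safe: true}, function(err, result) {
        if (err) throw err;                 // passing the variable should behave
        console.log(result);                // exactly like an inline literal
        db.close();
    });
});
```

If a sketch like this also segfaults, the problem sits in the driver's native BSON parser rather than in the data, which matches the `{ native_parser: false }` workaround that surfaces later in the log.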
[08:56:53] <samotarnik> hi everyone. mongo newbie here. i'm scraping some pages and saving them into mongo in batches, doing a basic bulk insert with an array of documents. sometimes i hit the insertion limit of 16 MB and lose the data. so i have two questions now. firstly, does hitting the limit mean that my whole batch (i.e array) exceeds 16 MB, or could it be that a *single document* is bigger than 16 MB alone? (i have no way of knowing that
[08:56:53] <samotarnik> up front). secondly, how does one generally handle such an error? could i do sth. on the db level, or do i have to take care of it on the app level?
[08:58:40] <Nodex> single document cannot exceed 16mb
[08:59:06] <Nodex> they must be very large web pages to hit that limit...
[09:00:00] <samotarnik> nodex: are you saying that a single document in the batch exceeded that limit? are you *sure* of that? (because this is very strange indeed)
[09:00:43] <Nodex> yes a single document
[09:00:49] <samotarnik> nodex: in other words i could do an insert of let's say 30 MB as long as individual documents in the array would not exceed 16 MB?
[09:01:04] <Nodex> NO. each document has a 16mb limit period
[09:02:23] <samotarnik> Nodex: sorry, but i don't follow. i have an array of documents (objects).
[09:02:52] <samotarnik> Nodex: (which i am inserting)
[09:03:37] <samotarnik> http://docs.mongodb.org/manual/reference/method/db.collection.insert/
[09:04:06] <samotarnik> Nodex: says here: "<document or array of documents>"
[09:04:47] <Nodex> so what don't you understand?
[09:05:14] <domae> is this how i search by _id in PHP? $collection->findOne(array('_id' => new MongoId('535a23d0089056e81b000039'))) I get NULL as answer, the id exists.
[09:06:01] <Nodex> domae : right collection?
[09:06:20] <domae> yup, i checked with find() before and copied the id
[09:06:34] <Nodex> can you pastebin your code?
[09:08:38] <samotarnik> Nodex: my question is: if i am doing an insertion of an array of documents---does the 16 MB limit apply to a single document in that array (i.e. the array itself could exceed e.g. 25 MB in a single insert) or to the array as a whole (the whole array must not exceed 16 MB) ?
[09:08:57] <Nodex> samotarnik : it applies to the document
[09:09:14] <Nodex> which means that ANY object in your array that is greater than 16mb will fail
[09:09:26] <Nodex> (seeing as they're all single documents)
[09:10:11] <domae> Nodex, http://pastebin.com/u1KTbMQd
[09:10:33] <samotarnik> Nodex: ok... do you maybe have a link or a source for that---i have to pass this information on?
[09:11:45] <Nodex> I don't sorry
[09:12:21] <Nodex> domae : the only thing I can think of is that the document did not hit the db before you query it. Try adding some write concern to your batch insert
[09:13:16] <samotarnik> Nodex: ok... tnx...
[09:13:17] <Nodex> your id's don't match either
[09:13:37] <Nodex> 535a23d0089056e81b000039 != 535a255d089056641e000039
[09:13:39] <rspijker> samotarnik: mongod has a message limit (48MB, iirc) and a document size limit. The document size limit applies to documents that can be stored in the DB. The message size limit applies to messages sent to the DB. If your array exceeds 48MB, it will fail unless the driver you use splits it up. If one of the docs exceeds 16MB it can't be inserted and therefore it will fail as well
[09:16:01] <domae> Nodex, found the error, it was my own fault. I rewrite the testdata with the same script that reads it, resulting in different ids every time.
[09:16:36] <Nodex> !!
[09:18:08] <samotarnik> rspijker: ok, that makes sense...
[09:19:24] <samotarnik> rspijker: so the array could be bigger than 16 MB as long as none of the docs exceeds that limit. but it can't be bigger than the message size limit (48 MB)..
[09:19:50] <samotarnik> rspijker: do you have a documentation source for that maybe?
[09:21:34] <rspijker> samotarnik: can't find it anywhere... might be that the message limit no longer exists
[09:21:48] <rspijker> the limits page in the docs does say that bulk operations can't exceed 1000 ops though
[09:21:59] <rspijker> http://docs.mongodb.org/manual/reference/limits/#operations
[09:22:16] <samotarnik> rspijker: yeah, i saw that...
[09:22:51] <rspijker> easiest is to test it, I suppose...
[09:23:14] <rspijker> just write a javascript function that creates two 15MB docs, put them in an array and do a bulk insert in some test collection
[09:23:55] <samotarnik> rspijker: but my array has more than 1000 docs, i am certain of that. and the error i get is that i exceeded the limit of 16 MB
[09:24:08] <samotarnik> rspijker: ok...
[09:24:16] <rspijker> well, could be that your driver knows of the 1000 doc limit
[09:24:21] <rspijker> and splits the array
[09:24:31] <rspijker> it can't split documents though...
[09:24:41] <samotarnik> rspijker: yeah...
[09:25:16] <samotarnik> rspijker, Nodex: thnx guys, will try it out...
[09:30:23] <rspijker> samotarnik: http://pastebin.com/WxdEM0Wd
[09:30:32] <rspijker> that works fine. Both docs should be around 10MB
[09:31:56] <samotarnik> rspijker: now, you're doing my work =P seriously: thank you...
[09:32:25] <rspijker> I wanted to know as well now... :P
[09:33:10] <samotarnik> rspijker: ok =)
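The pastebin content isn't part of the log; a sketch of what such a shell test might look like (collection name and sizes are illustrative):

```javascript
// build two ~10MB documents and insert them as a single array (a bulk insert)
var big = new Array(10 * 1024 * 1024).join("x");   // ~10MB string
db.bulktest.insert([
    {_id: 1, payload: big},
    {_id: 2, payload: big}
]);
db.bulktest.count();   // 2 -- the array may exceed 16MB, each document may not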
[10:07:47] <Guest90553> hi
[10:09:24] <Guest90553> anyone here who can help with user management and mongodb?
[10:20:55] <Nodex> best to just ask the question generally :)
[10:22:46] <TeTeT> hi, discovered a problem with a query using the java driver mongo-java-driver-2.12.0.jar - but it is quite hard to reproduce. An isolated test program did not trigger the bug, it seems to be related to previous events in a connection.
[10:23:44] <TeTeT> so the situation is: doing a query for the _id field in a collection. Works fine for any collection, but not for gridFS collection.
[10:24:05] <TeTeT> exception from driver is "can't load partial GridFSFile file"
[10:24:46] <TeTeT> but it only happens when I ran tests on the db before, when i just launch a separate program, it works fine
[10:25:25] <TeTeT> any suggestions on how to isolate the problem further? I've been thinking of using @Before and @After with the code on the tests
[10:32:35] <Guest90553> enabled auth=true, created admin user in admin db with roles [ "dbAdminAnyDatabase", "clusterAdmin" ], after login to mongo with admin user, the admin user is 'not authorized' to execute command 'createUser'
[10:34:47] <Guest90553> 'show users' also doesn't work
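Guest90553's role list is the likely culprit: createUser and show users need a userAdmin role, which neither dbAdminAnyDatabase nor clusterAdmin includes. A sketch of an admin user that can manage users (password is a placeholder):

```javascript
// in the mongo shell:
use admin
db.createUser({
    user: "admin",
    pwd: "secret",
    roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin"]
})
```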
[11:28:38] <TeTeT> on the problem I mentioned above, it seems that the exception is triggered when a GridFS file was ever stored in the collection. Then the driver seems to use GridFSFile, which throws the exception. Otherwise BasicDBObject seems to be used.
[11:49:08] <arussel> I've got: {"a": "x", "coll": [{"a1": x1, "a2":x2}, {"a1":y1, "a2": y2}]}
[11:50:23] <arussel> what operator should I use to find this doc based on the coll values ? should return if a1->x1,a2-> x2 but not if a1->x1, a2->y2
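The operator arussel is after is $elemMatch, which applies all of its conditions to the same array element; a minimal sketch (values are placeholders):

```javascript
// matches only when ONE array element has both a1:"x1" and a2:"x2"
db.docs.find({coll: {$elemMatch: {a1: "x1", a2: "x2"}}})

// the dotted form matches across different elements, so a1:"x1" in one
// element and a2:"y2" in another would also match -- not what's wanted here
db.docs.find({"coll.a1": "x1", "coll.a2": "y2"})
```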
[12:01:29] <policerspnd> Hey guys, could you point me at some good tutorial for working with mongodb and php?
[12:01:35] <policerspnd> most of the tutorials I see are php/mysql
[12:01:54] <Derick> hmm, it's really quite simple - let me find something for you
[12:01:57] <Nodex> there is a lot of info on the php.net page
[12:02:02] <Nodex> the basics at least
[12:04:54] <Derick> policerspnd: http://docs.php.net/manual/en/mongo.tutorial.php
[12:05:09] <policerspnd> basically I want to get my hands dirty with it, we plan to use it on some project and I've gathered some opinions from different people, they told me mongo is good at handling massive amounts of data, weaker at table-style operations, and good for location (gps) storage
[12:05:28] <Derick> sounds about right, but I'm biased
[12:05:32] <policerspnd> Thank you Derick.
[12:05:35] <policerspnd> I figured :)
[12:05:38] <Zelest> webscale ftw!
[12:05:47] <Derick> policerspnd: let me know if there are any questions, I work on the PHP driver and maps and geo stuff is a hobby
[12:06:19] <policerspnd> ha! awesome, I'll keep that in mind. thanks.
[12:06:36] <Zelest> Derick, off topic: is it hard to translate a GPS position to GLONASS?
[12:06:56] <Derick> Zelest: hmm, I thought they both used WGS84, so shouldn't they be the same?
[12:07:39] <Zelest> but fair enough :)
[12:08:19] <policerspnd> Does mongo return the queries in a JSON format?
[12:08:26] <Derick> nope, it's not the same
[12:08:41] <Derick> policerspnd: no, the php driver will return associative arrays
[12:09:02] <policerspnd> so if I want it in a json format I'll have to create an interface
[12:09:03] <Derick> Zelest: http://en.wikipedia.org/wiki/GLONASS#Accuracy
[12:09:15] <Derick> policerspnd: it doesn't return queries either...
[12:09:20] <policerspnd> :o
[12:09:31] <Derick> it returns documents
[12:09:58] <policerspnd> I'll read about it soon. I guess there are plenty of parsers tho?
[12:10:16] <Derick> parsers? :)
[12:10:22] <Derick> sorry, I am not following
[12:10:42] <policerspnd> I submit a query to mongo, I get a document in return - am I correct so far?
[12:10:50] <Derick> almost
[12:11:10] <Derick> you get a cursor in return, iterating over a cursor (with foreach) returns the results as documents
[12:11:26] <Derick> if you use "findOne()" you get one document back
[12:11:35] <policerspnd> I see, each result is a document
[12:11:39] <Derick> yes
[12:11:48] <policerspnd> the cursor just goes through the collection of the documents I got in result
[12:11:57] <Derick> yes
[12:12:10] <Derick> it's like iterating over a mysql resultset
[12:12:54] <policerspnd> I see, well I wont bore you with the more detailed and technical questions on how to read the document data itself, I assume there's plenty of info about it
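The cursor model Derick describes, in mongo shell terms (collection and query are illustrative):

```javascript
var cur = db.people.find({age: {$gte: 18}});   // returns a cursor, not the results
while (cur.hasNext()) {
    printjson(cur.next());                     // each result is one document
}
var one = db.people.findOne({name: "ann"});    // findOne returns a single document
```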
[12:24:24] <Industrial> Hi. I have a collection named 486444193. show collections shows it
[12:24:38] <Industrial> db['486444193'].find throws a find of undefined error
[12:24:42] <Industrial> not sure :S
[12:26:35] <Nodex> http://docs.mongodb.org/manual/reference/limits/#Restriction-on-Collection-Names
[12:26:43] <Nodex> Collection names should begin with an underscore or a letter character, and cannot: ...
[12:27:05] <Nodex> however it does not explicitly ban numbers as starting chars
[12:27:20] <blaubarschbube> hi. i am backing up collections with mongodump and bacula and that seems to work great. but what if i have to backup a really huge amount of data from a mongodb? is it possible to backup just the delta (incremental, differential) and to get that data from the slave so as not to harm the performance of the system?
[12:27:56] <Industrial> Nodex: so the collection is there but I am unable to query it
[12:28:23] <Nodex> blaubarschbube : mongoexport takes an optional "q" parameter
[12:28:48] <Industrial> ah, db.getCollection()
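db.getCollection() is the standard way to reach a collection whose name doesn't work as a shell property; a sketch:

```javascript
// db.486444193 is not valid property access, but this works:
db.getCollection("486444193").find()
```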
[12:29:19] <blaubarschbube> Nodex, you are a real wiki, thanks :)
[12:30:25] <Nodex> :)
[12:30:58] <Nodex> you can do things like $gt :{_id:ObjectId(".....")} and it saves you creating a new index
[12:31:10] <Nodex> as you get a free index on an ObjectId()
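A sketch of that delta trick, leaning on the automatic unique index on _id (the ObjectId value and collection name are placeholders):

```javascript
// ObjectIds are roughly time-ordered, so $gt against a remembered _id
// approximates "everything inserted since then", with no extra index:
var last = ObjectId("535a255d089056641e000039");
db.mycoll.find({_id: {$gt: last}})
```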
[12:31:26] <Industrial> Can I set the value of an object id to the value of another property on the document?
[12:31:35] <Industrial> right now I'm getting an immutable error
[12:31:47] <Nodex> not in a normal operation no
[12:32:02] <Industrial> I'll write an import/export script.
[12:32:06] <Industrial> then
[12:32:09] <Nodex> documents are not "self" aware
[12:32:25] <Nodex> unlike skynet
[12:39:35] <blaubarschbube> why does mongodump not provide a --slaveOk flag like mongoexport does?
[12:50:35] <toastedpenguin> fairly new to mongodb, looking to increase the size of a capped collection. is this possible and if so how is it done?
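The question goes unanswered in the log; for what it's worth, MongoDB at the time had no in-place resize for capped collections, so the usual route is to create a larger one and migrate (names and size are placeholders):

```javascript
// create the replacement with the new size (in bytes)
db.createCollection("events_new", {capped: true, size: 100 * 1024 * 1024});
// copy the existing data across, oldest first
db.events.find().forEach(function(doc) { db.events_new.insert(doc); });
// then verify the copy and swap the collections in the application
```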
[13:11:48] <Aswebbb> I was told to use Cassandra rather than mongodb
[13:11:52] <Aswebbb> any explanations?
[13:14:01] <foofoobar> Hi. I have a page where users can upload photos. The filename is generated with group.objectid + photo.objectid. I then place this file on amazon s3.
[13:14:51] <foofoobar> As I read, the mongo objectid is kind of a timestamp + machine id + X, so most of the bits in this id are in fact not random (which I want them to be so they are not guessable)
[13:15:32] <foofoobar> So in my understanding it's not the best idea to use two objectids as a "random filename", is that correct?
[13:17:06] <Derick> correct
[13:17:48] <foofoobar> All right, then I'm going to rewrite some stuff now :) thanks
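A sketch of the rewrite foofoobar has in mind, using node's crypto module for a genuinely unguessable name:

```javascript
var crypto = require('crypto');

// 16 random bytes = 128 bits of entropy, unlike the mostly predictable
// timestamp + machine id + counter layout of an ObjectId
var filename = crypto.randomBytes(16).toString('hex');
```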
[13:40:14] <blaubarschbube> Nodex, i dont get it. how can i backup only the delta with a mongoexport json query?
[13:41:18] <Nodex> your app needs to take care of what is or isn't delta, then you can query for it
[13:42:18] <blaubarschbube> but at the moment i make the query and get the delta, i could just as well make a full backup
[13:42:34] <blaubarschbube> i dont see the benefit
[13:43:21] <blaubarschbube> okay, i have a benefit in storage. but the main point for me is performance
[13:44:45] <Nodex> I don't understand what the problem is sorry
[13:45:07] <Nodex> you asked for delta, I gave you a way... not sure what else you want
[13:46:50] <blaubarschbube> Nodex, yes, i see, its my fault :) my bad english..
[13:47:31] <blaubarschbube> but your hint was interesting for me for another problem
[13:50:17] <blaubarschbube> my problem: i want to backup some really big collections without harming the performance. i thought it might be a bad idea to make a full backup once a day. its a production server and i dont want to make the developers in my company angry
[13:51:52] <blaubarschbube> so i hoped that there exists a possibility to just backup delta. im quite new to mongodb and dont really know the mechanisms.
[13:52:11] <Nodex> there is no built in method
[13:53:58] <Nodex> I use two methods currently. 1. Every time a record is updated I push the database + collection + _id to a redis hash which gets looped at intervals in a queue, reading the document into a json file (or json array), zips it up and puts it offsite
[13:54:29] <Nodex> 2. in my driver wrapper I append a "_last_updated" timestamp so I can delta anything I like from any point in time
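Method 2, as a sketch (collection, field values, and timestamp variable are illustrative):

```javascript
// the driver wrapper stamps every write...
db.items.update({_id: someId},
                {$set: {name: "x", _last_updated: new Date()}});

// ...so any backup run can export just the delta since the previous run:
db.items.find({_last_updated: {$gt: lastBackupTime}});
```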
[14:13:39] <blaubarschbube> Nodex, sounds nice. but for now i think that will be too complex for me. but this could be the next level.
[14:15:33] <blaubarschbube> is mongodump using slaveok by default? i cant find a command line parameter
[14:16:06] <Nodex> I don't know sorry
[14:22:01] <Derick> there is no parameter
[14:22:34] <Derick> theoretically, this should work:
[14:23:26] <Derick> mongodump mongodb://localhost/dbname?readPreference=secondary
[14:24:10] <Derick> but it does not
[14:24:20] <Derick> actually, I get a crash :-)
[14:28:38] <Derick> blaubarschbube: no, it does not know anything about replicasets. You just need to point it to a secondary node.
[14:36:30] <blaubarschbube> Derick, thanks, im just trying what you suggested
[14:36:58] <Derick> blaubarschbube: no, that --host thing doesn't work
[14:37:06] <Derick> you just need to give it the IP/port of the secondary node
[14:38:43] <blaubarschbube> ok. but i need to know which ip is primary and which is secondary since this can swap. or do i misunderstand something?
[14:39:05] <blaubarschbube> yes, i dont have three but just two servers
[14:39:15] <Derick> no, you understand correctly
[14:39:22] <Derick> you can script that easily
[14:39:25] <Derick> one sec, I'll look it up
[14:40:29] <Derick> mongo --quiet --eval 'printjson(db.isMaster().ismaster);'
[14:40:36] <Derick> will say "true" if you're connecting to a master
[14:40:43] <Derick> you can use that in a shell script to then pick the other node
[14:41:25] <blaubarschbube> sounds good
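Derick's suggestion as a small shell sketch (the two host names are placeholders for the replica set members):

```sh
# dump from whichever of the two members is currently the secondary
if [ "$(mongo --quiet --host hostA --eval 'print(db.isMaster().ismaster)')" = "true" ]; then
    mongodump --host hostB
else
    mongodump --host hostA
fi
```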
[14:51:52] <ben2_> I've been running a 3 member replica set for months v2.4.6 with no issues. I recently upgraded both of the secondary members to 2.6 and last night at about 6:40 am one of them quit. The only log output was this: 2014-04-25T06:48:00.029-0400 [conn4219] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "cgProd0", v: 20, pv: 1, checkEmpty: false, from: "cgmo0:27017" } ntoreturn:1 keyUpdates:0 numYields:0 reslen:150 108ms
[14:51:53] <ben2_> 2014-04-25T06:48:00.718-0400 [conn4207] command admin.$cmd command: serverStatus { serverStatus: 1 } keyUpdates:0 numYields:0 locks(micros) r:3880 reslen:3523 183ms
[14:52:48] <ben2_> The strange thing is that I have a 3 member TokuMX set running elsewhere and that's based on v2.4.9 and this happens with that too, it prints a similar final log message and shuts down at around 6:40 am each morning.
[14:53:32] <ben2_> Does anyone know of an issue introduced in 2.4.9+ that may have caused this or if there's any way I can get more information about what's happening? I'm on Ubuntu 13.10 x64
[14:57:07] <thearchitectnlog> can anyone help me, i inserted some special characters from node into mongodb and now when i try to insert an object that i define in the scope of the document i get a segmentation fault
[14:58:01] <thearchitectnlog> i dropped the collection by using db.collection.drop
[15:03:18] <boo1ean> Hi guys, I'm trying to insert a doc with geometry into a 2dsphere-indexed collection but get a "malformed geometry" error
[15:03:38] <boo1ean> here is geojson http://pastebin.com/TFF5sYPq
[15:03:47] <boo1ean> also it passes http://geojsonlint.com/
[15:03:53] <boo1ean> mongo version 2.6.0
[15:04:54] <boo1ean> haaalp!
[15:05:06] <Derick> yeah, that is a tricky one
[15:05:19] <Derick> check whether no two nodes are the same after each other
[15:05:23] <Derick> is it self-intersecting?
[15:05:38] <boo1ean> if you mean start = end then yes
[15:06:18] <Derick> no, http://en.wikipedia.org/wiki/Complex_polygon
[15:06:59] <cheeser> start != end isn't a polygon. that's just a line. ;)
[15:07:21] <Derick> yup
[15:08:17] <boo1ean> But a polygon should end with the start point, right (to be valid for mongo)?
[15:08:50] <Acatalepsy> Is anyone here on the mongodb-user group? Does it require some sort of confirmation before you can post?
[15:09:35] <Derick> first time posters require moderation I think
[15:09:36] <Acatalepsy> I've tried to post my issue twice and so far it's just vanished.
[15:09:45] <Acatalepsy> Welp.
[15:09:46] <Derick> boo1ean: that's wrong requirement
[15:10:02] <Derick> boo1ean: also, no two nodes can be the same in a row (although that I believe is getting fixed)
[15:10:20] <Acatalepsy> Okay then, maybe you folks can help me:
[15:11:01] <thearchitectnlog> ?
[15:11:47] <boo1ean> Derick: but if start != end geo doesn't pass 2dsphere index validation
[15:11:55] <Acatalepsy> I've got documents with a field and (sometimes!) an array, and I want to find all documents with a field that matches a given value, or whose array contains that value.
[15:12:23] <Acatalepsy> (Also whose timestamps fall in a given range)
[15:12:41] <d0x> hi, i have a json containing a full url and i'd like to extract the domain in an aggregation framework stage. Is that possible? It seems that regex groups are not supported
[15:12:59] <Derick> Acatalepsy: you can just do a normal equality match for that
[15:13:12] <Derick> { foo: [ 1, 2, 3 ] }
[15:13:14] <Acatalepsy> Is there anything I can do with indexes ( or map-reduce? ) that will help me ensure very fast matching.
[15:13:20] <Derick> you can find that by using: find( { foo: 3 } );
[15:13:36] <Derick> if you put an index on foo, it works already
[15:13:47] <Acatalepsy> The query works just fine; my concern is performance.
[15:14:03] <cheeser> are you having a perf problem now?
[15:14:26] <Derick> Acatalepsy: just put an index on foo, it will multi-index in case its value is an array
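Derick's point in one place, as a shell sketch:

```javascript
db.c.insert({foo: [1, 2, 3]});
db.c.find({foo: 3});           // equality against an array matches any element
db.c.ensureIndex({foo: 1});    // automatically becomes a multikey index
```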
[15:15:10] <Acatalepsy> cheeser: No, but then I don't have any actual data now.
[15:15:31] <Acatalepsy> Derick: Would it make sense to include the field in the array.
[15:16:08] <Acatalepsy> For example, to *just* have the array, and then when I extract the doc from mongo go document.field = document.array[0] ?
[15:16:30] <cheeser> i prefer that, personally.
[15:16:39] <cheeser> but i'm a strong type kinda guy.
[15:18:32] <boo1ean> Is there some reference where I can find all requirements to MultiPolygon?
[15:18:54] <boo1ean> It doesn't have duplicate nodes, and linter says it's ok
[15:22:35] <crombar> can anyone provide some information on the charges that apply for using the mms backup service? I just had an almost empty database estimated to be 150$. If that estimate is accurate, it would be a lot cheaper to run my own replica sets
[15:23:35] <cheeser> replica sets and backups are different things, fwiw
[15:24:11] <crombar> cheeser: I agree on that
[15:24:19] <Derick> Acatalepsy: I do prefer that too
[15:24:45] <Derick> boo1ean: I don't think you are using a multipolygon, just a normal one
[15:24:52] <thearchitectnlog> { native_parser: false } when i use this it works, when i remove it i get a segmentation error
[15:24:52] <Derick> you might need an extra set of [ ] around it
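The nesting difference Derick is pointing at, for reference (coordinates are placeholders):

```javascript
// Polygon: an array of rings, each ring an array of [lng, lat] points
{type: "Polygon", coordinates: [ [ [0,0], [0,1], [1,1], [0,0] ] ]}

// MultiPolygon: one more level of nesting -- an array of polygons
{type: "MultiPolygon", coordinates: [ [ [ [0,0], [0,1], [1,1], [0,0] ] ] ]}
```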
[15:25:06] <boo1ean> Derick: http://pastebin.com/TFF5sYPq
[15:25:22] <Derick> ok, there are 4
[15:25:33] <Derick> i wish we exposed the real s3 error message
[15:25:34] <crombar> still, I can set up quite a lot of stuff for 150$ these days and I don't understand the purpose of the free usage tier if an empty db will cost me 150$ a month
[15:25:39] <Derick> I had opened an issue for that in the past
[15:25:52] <boo1ean> Derick: if you put it into http://geojsonlint.com/ you'll see the map
[15:27:09] <Derick> boo1ean: and which mongodb version are you using?
[15:27:19] <boo1ean> Derick: 2.6.0
[15:27:23] <Derick> k
[15:27:25] <Derick> hmm
[15:28:18] <Derick> think I found something odd
[15:28:25] <Derick> in the river, you're sharing a point
[15:30:03] <boo1ean> Derick: I've checked for duplicates and didn't find
[15:38:07] <Derick> boo1ean: they can be so close together that they make the polygon microscopically intersecting
[15:44:07] <boo1ean_> Derick: I've had the same thoughts
[15:45:42] <dgray> Hello, was just wondering if someone could direct me to a reference of the mongodb wire protocol for the new 2.5.5+ features? (specifically, bulk write operations)
[15:47:39] <Derick> dgray: https://github.com/10gen/specifications/blob/master/source/write-commands.rst
[15:47:50] <dgray> 404
[15:48:07] <kali> Derick: it's private (i'm interested too, btw)
[15:48:09] <Derick> meh, you might have to be logged in and have permissions for 10gen :-/
[15:48:18] <dgray> that's pretty lame!
[15:48:34] <Derick> let me find out
[15:48:39] <Derick> because that is pretty lame :)
[15:50:41] <kali> mmm... it does not extend the wire protocol, but adds commands, as far as i can tell
[15:51:14] <dgray> the wire protocol for update doesn't seem to allow for multiple update statements as far as I can tell...
[15:51:22] <Derick> kali: you're right
[15:51:42] <thearchitectnlog> after i inserted special characters into mongodb from the nodejs native driver i can't insert any more objects from node that i define in the scope of the document, it's giving me a segmentation error, can anyone plz help
[15:51:59] <kali> dgray: it's a command. it's materialized by a OP_QUERY on the $cmd collection
[15:52:06] <kali> dgray: just like findAndModify
[15:53:30] <dgray> OP_QUERY?
[15:53:32] <dgray> that's odd
[15:53:49] <Derick> commands are implemented as a findOne query (OP_QUERY) against the $cmd collection
[15:54:17] <dgray> ah, got it, okay.
[15:54:34] <dgray> thanks! I'll see where I can get from there
[15:55:09] <Derick> dgray: the php extension should have a very simple implementation in C for you to look at
[16:11:52] <dgray> Derick: I think I've figured it out, but how do I send "ordered"?
[16:11:57] <dgray> That's the one thing that doesn't fit
[16:13:38] <dgray> nevermind, ignore me
[16:15:46] <Acatalepsy> Just thought of something - what if a field contains either, say, a string, or an array of strings?
[16:16:12] <Acatalepsy> If you search by that field, will mongo do so efficiently?
[16:24:18] <martinbetz_> Hello, I have a question: I defined a model for Mongo with class RSSPost(db.Document): link = db.URLField(required=True). When I try to display this link in my jinja2 template, I get (<mongoengine.fields.URLField object at 0x1052a29d0>,) instead of the link itself. What am I doing wrong?
[16:27:55] <martinbetz_> Do I have to add a __string__ as in Django?
[16:56:56] <ben2_> MMS is giving me a startup warning that the collection lacks a unique index on _id. This index is needed for replication to function properly. When I do db.mycollection.ensureIndex({_id:1}, {unique:true}) it does nothing however. Doesn't the _id field have a unique index by default?
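A way to check what the collection actually has, since _id normally gets its unique index automatically unless the collection was created with autoIndexId: false:

```javascript
// list the indexes; a healthy collection shows the automatic {_id: 1} index
db.mycollection.getIndexes()
```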
[17:54:51] <qx79> Hello. How would I change an admin user's password? When I try to do "use admin" then db.addUser("admin", "newpassword"); I get "$err" : "unauthorized db:admin lock type:-1 client:127.0.0.1", "code" : 10057
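The error says the session isn't authenticated; qx79 would need to authenticate as a user admin first. Newer shells also have a dedicated helper for this (passwords are placeholders):

```javascript
// in the mongo shell:
use admin
db.auth("admin", "oldpassword")                  // authenticate first
db.changeUserPassword("admin", "newpassword")
```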
[19:01:48] <retran> how much horsepower does MongoS need
[19:12:49] <arussel> the doc says: The upsert creates a document with data from both <query> and <update>
[19:13:06] <arussel> does it try to do something from $elemMatch ?
[19:13:29] <arussel> or does it only use field:value from the query ?
[19:21:48] <skot> It uses anything the query parses as equality predicates.
[19:24:29] <skot> looks like $elemMatch is ignored.
[19:26:37] <skot> Which makes sense since you don't want any array-like stuff showing up in the newly inserted doc
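skot's answer, sketched:

```javascript
// equality predicates from the query are copied into the upserted document:
db.c.update({a: "x"}, {$set: {b: 1}}, {upsert: true});
// no match -> inserts {a: "x", b: 1}

// an $elemMatch clause contributes nothing to the new document:
db.c.update({coll: {$elemMatch: {a1: "x1"}}}, {$set: {b: 1}}, {upsert: true});
// no match -> inserts just {b: 1} (plus a generated _id)
```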
[21:11:36] <ChALkeR> Hi all.
[21:12:19] <ChALkeR> Using nodejs and mongodb-native, I get a throughput of 1 MB / second at 50 queries / second, average object size: 20k.
[21:12:34] <ChALkeR> Each request takes about 20ms.
[21:12:40] <ChALkeR> explain() showed 0ms.
[21:13:05] <ChALkeR> Db and the nodejs script are on the same machine.
[21:13:42] <ChALkeR> Where does it spend 20ms for each query and is there a way to optimize it?
[21:22:58] <kali> ChALkeR: i suggest you set the profiler at 5 or 10ms and look at the log: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
[21:23:27] <kali> ChALkeR: trying to decide if "explain" in the console and your production queries have a different behaviour
[21:26:24] <ChALkeR> kali: Hm, thank you.
[21:30:37] <ChALkeR> Strange, I made a separate nodejs-based testcase, and it became 5ms instead of 20ms. Need to debug my code more =).
[21:32:19] <ChALkeR> Ah, adding a {sort: {_id: -1}} slows it down.
[21:35:10] <ChALkeR> explain() shows a 'clauses' property with two identical objects.
[21:43:54] <ChALkeR> kali, Thank you, that gave me the correct profiling data.
[21:45:41] <ChALkeR> Yes, it used a different index.
[21:50:30] <ChALkeR> Solved this using the hint option.
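The fix ChALkeR landed on, as a sketch (collection and query are illustrative):

```javascript
// force the planner onto the _id index instead of the one it kept picking:
db.items.find(query).sort({_id: -1}).hint({_id: 1})
```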
[22:00:27] <chris_> i've been looking at some nodejs/mongodb tutorials and noticed that they are using collection names with the first letter capitalized, is this a normal convention for mongo?
[22:49:34] <unholycrab> how do i remove a monitoring agent from mongomms?
[22:49:58] <unholycrab> ah, i've found it
[22:53:40] <unholycrab> so, i have 4 separate database clusters that i want to monitor with MMS
[22:53:50] <unholycrab> they can't talk to each other, so i can't use a single monitoring agent
[22:54:11] <unholycrab> can i aggregate all 4 clusters into mongomms?
[22:55:02] <unholycrab> i tried adding 2 agents in different regions to the same mongomms account, both are showing up on the mms panel, but only one is getting data
[22:55:15] <unholycrab> and that one agent can only get data about the cluster its attached to
[22:58:22] <unholycrab> do i need to spin up a server for the agent that can talk to all 4 clusters?
[23:36:55] <unholycrab> any way to manually specify replica set members within MMS?
[23:37:28] <unholycrab> if i add the primary, it will add the secondary hostnames based on what is entered into the rs.conf... but they are local IPs and the mms agent can't connect on those addresses
[23:48:09] <joannac> unholycrab: you can either set preferred addresses, or reconfig
[23:51:09] <unholycrab> joannac: preferred addresses?